Transaction writeset store #3903
base: master
Conversation
Please run ./scripts/check_commit.sh.
Benchmark for cc04b9c
@@ -138,9 +143,11 @@ static VEC_PREFIX_NAME_V3: Lazy<Vec<ColumnFamilyName>> = Lazy::new(|| {
    TRANSACTION_INFO_HASH_PREFIX_NAME,
    CONTRACT_EVENT_PREFIX_NAME,
    FAILED_BLOCK_PREFIX_NAME,
    WRITE_SET_PRIFIX_NAME,
Please confirm whether this can simply be added here — what about the db upgrade logic?
I just looked at the upgrade code; it only covers modifications to existing structures, so a purely new addition should not need the upgrade flow.
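For context, a minimal sketch of what such an additive column-family change looks like; only the WRITE_SET_PRIFIX_NAME identifier comes from the diff, while the constant's string value and the surrounding definitions are assumptions:

```rust
// Sketch only: the string value and list layout are assumptions, not the PR's code.
pub type ColumnFamilyName = &'static str;

pub const WRITE_SET_PRIFIX_NAME: ColumnFamilyName = "write_set";

// Appending a brand-new column family to the V3 prefix list only causes an
// empty column family to be created when the store is opened; no existing
// data layout changes, which is why no db-upgrade step is required.
pub static VEC_PREFIX_NAME_V3: &[ColumnFamilyName] = &[
    // ...existing prefixes elided...
    WRITE_SET_PRIFIX_NAME,
];
```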
fn to_write_set(access_path: AccessPath, value: Vec<u8>) -> WriteSet {
    WriteSetMut::new(vec![
        (
Should a test for the table item case be added as well?
Added.
executor/src/block_executor.rs
Outdated
@@ -23,6 +26,7 @@ impl Default for BlockExecutedData {
    state_root: HashValue::zero(),
    txn_events: vec![],
    txn_infos: vec![],
    txn_write_sets: HashMap::default(),
Why use a HashMap here? The other fields all use vec!. Is there a need for fast lookup and insertion?
Because a block involves multiple transactions, a map was simply more convenient to implement here.
Is it at L80 or somewhere else? Please point me to it.
This has been changed to a Vec. The original map-based design was meant to let callers fetch entries one-to-one; the reasons for the change are: 1. duplicate hash values no longer overwrite each other and cause data loss, and 2. the database write also stores the entries in order.
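To make the trade-off concrete, here is a sketch of the two shapes being discussed; apart from the txn_write_sets field name, the types below are stand-ins rather than starcoin's own:

```rust
use std::collections::HashMap;

// Placeholder types standing in for starcoin's HashValue and WriteSet;
// they are assumptions for illustration only.
type HashValue = [u8; 32];
type WriteSet = Vec<u8>;

// Map form: convenient keyed lookup, but duplicate transaction hashes
// overwrite each other and iteration order is unspecified.
struct BlockExecutedDataMap {
    txn_write_sets: HashMap<HashValue, WriteSet>,
}

// Vec form (what the PR settled on): duplicates are kept and the order
// matches the transaction order, which is also the order used when the
// batch is written to the database.
struct BlockExecutedDataVec {
    txn_write_sets: Vec<(HashValue, WriteSet)>,
}
```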
Codecov Report
@@ Coverage Diff @@
## master #3903 +/- ##
==========================================
+ Coverage 53.68% 53.77% +0.10%
==========================================
Files 618 622 +4
Lines 67991 68216 +225
==========================================
+ Hits 36492 36675 +183
- Misses 31499 31541 +42
... and 65 files with indirect coverage changes.
Benchmark for aeda101
chain/src/chain.rs
Outdated
@@ -505,6 +506,10 @@ impl BlockChain {

    storage.save_block_info(block_info.clone())?;

    for (hash_value, write_set) in txn_write_set.iter() {
We can consume the txn_write_set, because it will not be used any more.
Passed it into the batch save function.
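A minimal sketch of the consume-it suggestion; the function names and element types here are assumptions, not the PR's actual API:

```rust
// Illustrative stand-ins for starcoin's HashValue and WriteSet types.
type HashValue = [u8; 32];
type WriteSet = Vec<u8>;

fn save_write_set(_hash: HashValue, _ws: WriteSet) { /* write one row */ }

fn save_all(txn_write_set: Vec<(HashValue, WriteSet)>) {
    // `for ... in txn_write_set` (without .iter()) takes ownership, so each
    // entry is moved into the store with no clone; the vec cannot be reused
    // afterwards, which is fine because it is not needed any more.
    for (hash_value, write_set) in txn_write_set {
        save_write_set(hash_value, write_set);
    }
}
```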
Please check out the comment. @welbon
chain/src/chain.rs
Outdated
@@ -505,6 +506,11 @@ impl BlockChain {

    storage.save_block_info(block_info.clone())?;

    for (hash_value, write_set) in txn_write_set {
        storage.save_write_set(hash_value, write_set)?;
It looks like storage.save_write_set could be turned into a batch_write; take save_transaction_batch as a reference and see whether that fits. batch_write mainly saves a few RocksDB calls — each call is cheap, but more importantly everything is written in one go. RocksDB currently flushes to disk asynchronously rather than on every write; if it flushed synchronously on every write it would be very slow. @welbon @simonjiao
We plan to add this before going live.
Batch processing has been added here.
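Roughly what the batching suggestion amounts to; WriteBatch, write_batch, and the element types below are illustrative stand-ins, not starcoin's storage API:

```rust
// Stand-in types for illustration only.
type HashValue = [u8; 32];
type WriteSet = Vec<u8>;

#[derive(Default)]
struct WriteBatch {
    rows: Vec<(HashValue, WriteSet)>,
}

struct Storage;

impl Storage {
    // One store-level call per block instead of one call per transaction.
    fn write_batch(&self, batch: WriteBatch) {
        // In the real store this would become a single RocksDB write-batch
        // commit; since RocksDB flushes asynchronously, the gain here is
        // mainly the reduced number of calls.
        let _ = batch;
    }
}

fn save_write_set_batch(storage: &Storage, txn_write_set: Vec<(HashValue, WriteSet)>) {
    let mut batch = WriteBatch::default();
    for (hash_value, write_set) in txn_write_set {
        batch.rows.push((hash_value, write_set));
    }
    storage.write_batch(batch);
}
```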
There is a big change to WriteOp in Move. WriteOp originally had two kinds of operations, Value and Delete; it now has three: Insert, Update, and Delete. I think starcoin will need to consider upgrading to it.
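For illustration, the shape of that change as described in the comment; the actual variant names in the upstream Move crate may differ:

```rust
// Old shape: a write either sets a value or deletes the key.
enum WriteOpOld {
    Value(Vec<u8>),
    Deletion,
}

// New shape: creation and modification are distinguished, which matters for
// anything (like a write-set store) that replays or audits state changes.
enum WriteOpNew {
    Insert(Vec<u8>),
    Update(Vec<u8>),
    Delete,
}
```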
LGTM
LGTM
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Other information