we apologize for all computer
incompatibilities around the world. With
that in mind we’re only six minutes late
starting so I’d like to get going. To give
you an idea of what we’re going to do
today: this is what I hope is the first
in a series of many discussions that
will become public for the development
teams working on Bitcoin Cash and the
main goal of this meeting is to discuss
the potential items for the next hardfork/upgrade to Bitcoin Cash to
determine which items are
realistic to consider for inclusion in
the may 2019 upgrade and determine the
status of each of the items listed and
if further discussion is required to
solve any issues that there might be. As for
the specific issues, we’ll go through
them one by one. And I’m going to start
now with doing some introductions and so
my top left-hand corner is Jason Cox.
Jason if you can introduce yourself?
sure. I am Jason Cox
Bitcoin Cash developer currently
contributing to Bitcoin ABC. Thank you.
Antony Zegers? Oh hi, Antony Zegers. I’m known as Mengerian online in
forums and stuff and yeah I work on
Bitcoin ABC. Amaury, introduce yourself please?
Yeah, so I’m Amaury. I am the lead developer for Bitcoin ABC.
Mark? Hi, I’m Mark Lundeberg.
I’m just sort of getting into
Bitcoin Cash and trying to help out with
the development process. Just getting started.
Thank you Mark
Emil? So I’m Emil Oldenburg. I’m the
CTO of Bitcoin.com. Okay thank you.
Yep, Chris Pacia. I work on OpenBazaar
and also BCHD, a Bitcoin Cash full node. Okay, thank you.
Andrea? Hi everybody I’m
Andrea Suisani. I’m a Bitcoin
Unlimited developer. Thank you. So a small
group with us today and we’re going to
cover a number of subjects as I said
earlier the main goal is to
determine what’s realistic to be
included in the next upgrade. The items
that are going to be discussed, I’ll just
run through the list of them first and
then we’ll dive right in. The first item
is BIP 62 items. And I’ll just read what was
written: does it make sense to activate
the remaining items, NULLDUMMY and
minimal data push? Should CleanStack be
reversed? Second item on the list is the
100-byte transaction size. Should that
be changed? What is the best approach for
this? Third item is Schnorr. Is there any
chance this will be ready in time? What
needs to happen to progress this item?
And fourth item is opcodes. Is anyone
motivated to take responsibility for
these? Someone needs to take ownership
and work on it if it is to be included in the upgrade. Should only some be activated,
e.g. OP_MUL and OP_INVERT? And then the
last item: is there any desire to rework
the sigops accounting?
So I think we’ll throw it open for discussion,
starting with the BIP 62 items. Would anyone like to
dive into discussion on BIP 62?
Jason. So just to get started I wanted to
make sure everyone here understands kind
of what BIP 62 tackles.
My understanding is it’s primarily about
malleability. And recently what was
implemented in the last hard fork was
the implementation for clean stack and
enforcement of clean stack. I also
understand that enforcing this has
caused some addresses to be unspendable.
Like, for people who spent SegWit UTXOs
on BTC, spending the same UTXOs on
Bitcoin Cash is now impossible. If
someone thinks this is incorrect or
inaccurate please add to that. Because
that’s my understanding so far. Well it’s..
Yeah just to clarify I think it’s
basically when people accidentally send
Bitcoin Cash to a SegWit P2SH-address. So yeah that shouldn’t be a
normal thing, but I guess people
are doing that by accident. And the
CleanStack rule prevents miners from
being able to just, like, save the
people’s money. So people cannot redeem
their SegWit coins on BCH
right now. Well if you send… if someone
accidentally sends Bitcoin Cash to a
SegWit address, like basically you need
to get a miner to help you recover that.
Because it, anyway it’s hard to explain,
anyone could mine it I guess. Like
the miner, you have to have a trusted
miner to be able to get your coins out
of there. But I think that there has been in
the past a few miners that have been
helping people out to recover their
coins when they when they do that. But I
mean I guess I don’t know if that’s
still…I mean I guess my suggestion would
be someone needs to find out
if this is a big problem. Or if there are
actually miners still able, like willing
to help people with it. I don’t
really know what that information is. As
far as I know BTC.com is still
doing it. No. It’s not possible to do
it since the last fork. So I think we have a
first action item here: Is to make sure
everybody is aware of what’s going on.
Yeah, do we know the scale of the impact? Like number of users? Amount of
money that is now un-spendable due to
this? We have no way to know. Okay.
We would need to index all SegWit addresses
first and then check the UTXO. So
that’s quite… well it takes a while.
But there’s no way to know. Because they are
P2SH. So you know….(inaudible)….address.
Does it apply to both the P2SH and
regular SegWit? No just P2SH. So
I guess in terms of BIP 62,
I guess my impression of it, or my
take on it is there’s basically no point.
Like you kind of should do all of it…
Like if we have some hope of doing all
the items then maybe there’s some value
to that, but if you still have even one
malleability thing left, there’s not much
value to just doing one or two. So I
don’t know, like if we’re not… unless
we’re gonna do… unless there’s some
motivation to do all of them, maybe it
would make sense to change this. But yeah, it would also make sense to know what
the impact is.
I just like to say one thing which is:
it is in principle possible to have
all the BIP 62 relevant things like to
have all the third-party malleability
fixed without breaking any coins. But
it requires a little bit of a
different approach. So it would be like a
new, you know, it would be a new hard fork
to move to that sort of approach. So
one thing that might make sense is even
to just roll back what we have right now
and then put in something better later
on. Or, I don’t know if that’s
possible. So, for that something better,
are you saying like a new transaction
format or something like that? Oh no, just
well so for example you could say that
you only apply the CleanStack rule to
pay-to-public-key-hash and pay-to-script-hash multisig, very standard sorts of
transactions. That would be one way to do
it. I don’t know if that’s convenient but
you would say that every other script
would have to manually check by itself
using OP_DEPTH. And if you don’t
want to be malleable or if you don’t
want your transaction to be malleated
then you have to use that sort of
additional mechanism in your script. So
it’s kind of a workaround but that would…
..that’s in principle possible.
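As a rough illustration of that workaround, here is a minimal Python sketch. This is my own illustration of the OP_DEPTH idea, not any node’s actual interpreter; the function name and stack layout are assumptions:

```python
# A script can protect itself from third-party stack malleability by
# checking its own input-stack depth (the OP_DEPTH idea), instead of
# relying on a network-wide CleanStack rule.

def run_guarded_script(stack, expected_depth):
    """Simulate an 'OP_DEPTH <n> OP_NUMEQUALVERIFY' prefix: fail if the
    spender's input stack carries extra (possibly malleated) items."""
    if len(stack) != expected_depth:
        return False  # a third party appended junk items: reject
    # ... the rest of the script (signature checks, etc.) would run here
    return True

# A 2-of-2 multisig spend normally pushes 3 items (null dummy + 2 sigs):
assert run_guarded_script([b"", b"sig1", b"sig2"], expected_depth=3)
# A malleated version with an extra push is rejected:
assert not run_guarded_script([b"junk", b"", b"sig1", b"sig2"], expected_depth=3)
```

The point of the workaround is that the malleability check moves into each script that wants it, instead of being a consensus rule applied to every output.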
Is there any further discussion on this item?
well I think that what Amaury said about
an action item, we should maybe try to
make that concrete so that we actually
do… someone actually does something.
So I guess that it’s a matter of
understanding: do we want to fix
all the third-party
malleability that BIP 62 is fixing?
If yes, then we have to go forward and then
get a measure
of the amount of coins that are locked in
P2SH SegWit addresses, if it is
possible some way. And once we
have these two data points we can
decide what to do. Otherwise, if there are
no coins locked in these P2SH
SegWit addresses, well, we could just
go ahead and keep the fix
that we have now, and then once we assess
whether we want the other fixes in the
next hard forks, well, we could do it. But
we have to assess what we want to do
first, and then get an understanding of
the measure of the problem
that we are tackling. To kind of add to
that question of the impact: is it only
the SegWit style addresses that are
impacted or are there other use cases
that we haven’t discussed. I know the
SegWit one is the common one. Because if
there isn’t maybe that helps limit the
known impact. So the flag that has been activated
had been a standardness rule for quite some
time. So you need the miners’
cooperation to spend those coins no
matter what. Okay. That’s why it’s not
that hard for miners to add APIs and
tools for their users, or anyone actually,
to just submit those transactions. Is
that something, like who would… is that
something you guys could do Emil? Like
try to, since you have actual users yeah
It is actually something
that we have considered, to add an API
where you can submit these non-standard
transactions. But only if
they follow a specific format, so
we don’t mine crazy non-standard transactions.
I was just talking about finding out if
there’s a problem with people
accidentally sending their coins to
these addresses. If that’s an issue or
Have you guys encountered that? We
got a few requests after the Bitcoin
Cash hard fork
but that was a long time ago.
Yeah nothing recent. I don’t actually know the status.
You would need to
index all SegWit addresses on the BTC
network first and then check that against the
UTXO set on Bitcoin Cash. So there’s
quite a large data set to go through.
Some people are doing that deliberately
though because the SegWit addresses are
“anyone can spend” on BCH. So what was
happening in a lot of cases was, as
soon as somebody spent their SegWit
coin from the same address, some miner,
some unknown miner would come in
and just gather up all those accidental
Bitcoin Cash coins on those UTXOs.
Because any miner can take them once the
Redeem script is revealed. So
actually now we have a better situation
that we had before the last fork. Because
miners can’t take the
coins for them? Like they are stuck for
everybody right? Yeah. But is it better or
worse? I don’t know. It’s different. Sorry,
I should have chosen a
different word, but it’s different.
So in addition to the impact, we should
be determining how many people perceive
this as something that needs to be
fixed. Because you know maybe there’s a
large number of UTXOs potentially
Impacted but if people don’t feel strongly about
it then maybe it’s not worth fixing.
I can add from my anecdotal
experience: in the first few months
after the August 2017 fork I heard from
fellows, and also other people on the
internet, complaints about
the fact, well, complaints from
people that wrongly sent BCH to BTC.
But this faded out since we have the
new address format and other things, and
people got more used to the fork.
It could be just that I’m in
some kind of bias bubble, but
it’s been a very long while since I’ve
heard someone tell me that there is some
kind of accidental sending from BCH to
BTC. Just nothing more. All right, with
that in mind, a suggestion to move on to
the next item.
If there are no objections. Hmm, before
we move on, maybe we should find an owner
real quick, just for someone to follow up
on the impact. Emil, are you able to
take that? Is that something Bitcoin.com
would be interested in looking into?
I’m not sure we can commit to that right
now, though; we have a lot of things on
our plate. Yeah, so basically I guess the
status of this item is we just need more
information, it seems like. All right, with that,
we’ll move on to the next item, then: the
100-byte transaction size. Should it
be changed, and what is the best approach
for this? And who would like to start off
on that? I can start off on that, yeah.
Thanks. Yeah, so when the last
consensus change happened,
I was pointing out that this 100-byte limit,
you know, in principle could affect
some transactions. There were like ten
transactions since the August 2017 hard
fork, or something like that,
only ten transactions, that were less
than 100 bytes. I think four of them were
coinbases, something like this. So it
could be relaxed to 64
bytes and have the same intended effect.
So just to remind everyone, the intended
effect here is to prevent a technical
vulnerability in the Merkle tree, where
you can have a
node that looks like a leaf, or something
like this, I don’t remember the exact
thing. But you could relax that
to be a 64-byte limit and that would
certainly be enough for everybody, I think.
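To make the Merkle tree issue concrete, here is a small Python sketch. Only the length arithmetic matters; the “transactions” are stand-in byte strings, not real serializations:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# An inner node of Bitcoin's Merkle tree is sha256d(left || right),
# where left and right are 32-byte child hashes: the input is 64 bytes.
left, right = sha256d(b"tx a"), sha256d(b"tx b")
inner_node_input = left + right
assert len(inner_node_input) == 64

# A txid is sha256d(tx_serialization). If a transaction could serialize
# to exactly 64 bytes, the same 64-byte string could be presented either
# as a transaction or as an inner node, confusing Merkle proofs.
# Forbidding 64-byte transactions removes the ambiguity, which is why
# 64 (not 100) is the size that actually matters.
ambiguous = inner_node_input  # one 64-byte blob, two interpretations
assert sha256d(ambiguous) == sha256d(left + right)
```

This is why relaxing the limit from 100 down to anything that still excludes exactly-64-byte transactions preserves the intended protection.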
Yeah, so this rule creates a headache
for the mining pools, because there’s
another rule that says, I think,
the coinbase input can...
it has to be... no, wait,
it cannot be larger than 100 bytes, and
the total transaction cannot be smaller
than 100 bytes. So there are a lot of
extra rules that you need to add to a new
mining pool to make sure that you don’t
accidentally mine a coinbase that is too
short. Because the way mining pools work
is that you configure the mining pool in
a config file to specify your mining
pool name, and if you start a mining pool and
you want to be anonymous, you don’t put
anything in the coinbase. So
it means that you need to fill up
your coinbase message with random
garbage if you want to be an anonymous
miner, because it’s only anonymous
miners that risk mining a coinbase
smaller than 100 bytes.
Like, if you add your mining pool
name, like pool.bitcoin.com, this
is usually not a problem, but if you
don’t add anything, you risk invalid
blocks. So would it make sense
dropping the transaction size limit
to something like 80 bytes? Are there any
use cases between 64 and 100?
I’m not personally aware of any. But if
you’re going to change it, why not just
do it? Since we
have to change it, why not say
everything higher than 64 is okay, rather
than put in an arbitrary 80 or 90? If we have
to change it, I would even go further and
say everything different from 64 bytes.
Yes, I find it more logical, rather than
putting in another random number. Yeah, I’ve
heard this argument a lot, and it does
make sense to me. The only thing is,
usually you want to design a, you know, a
critical system like this to have the
behavior that you’re looking for, and it
should match the use cases that you want
it to match. So when we’re talking
about transactions that are below, say, 80
bytes, I’ve asked a number of people
this: what are the use cases for
these really tiny transactions? And I
haven’t heard anything very useful other
than the exploit, which I guess is a
use case, technically. So by simply saying
you can’t have a transaction smaller
than 80 bytes, you’re limiting any
strange behavior that was not intended
by the system. It makes it easier to design
the system to be anti-fragile. It’s not
that we’re tackling any particular
exploit; it’s tackling exploits that are
currently unknown. The thing is that we
are conflating two things here. Before we
introduced the constraint on the
transaction size, what was the situation?
Like, there was no constraint
that I’m aware of, no lower-bound
constraint on size, like 80, 70,
whatever, right? We didn’t have it. Okay, cool.
So we are saying now that we probably
made it stricter than we should have
done, and we are going to relax it.
Yeah, this is the first thing. The other
thing is, since we are designing a very
mission-critical system,
while we are changing it again, we
need to think carefully if we want to
move to the situation we had before, or
we just want to relax the
constraint a little bit, like 80 bytes.
And I see your point, but it’s like
we are mixing two things here.
There are two things on the plate: the
first one is we want to go back to where
we were before, and the second one is, oh,
maybe where we were before wasn’t as
thoughtful as it should have been in
the first place.
So I just want to underline that. The only
thing I’d want to add to that is: it’s
easier to drop this limit than it is to
raise it. So let’s say we do drop
it all the way down to 64, and then we
later find out, oh, there’s a somewhat
malicious use case at, let’s just say, 65.
Raising that limit back up is a little
bit more complicated in terms of
deployment. But other than that, there is
not a strong case for it. But you say
that it is more complicated because you are
more strict. But since, I
think, we are going to introduce
this hypothetical raising of the
constraint via a fork, right? So I guess
that yes, it is more complicated, but at
least for full nodes; for SPV wallets
it will be different, probably. But yeah,
what’s more complicated is that
you need to search
through the mempool and find the
violating transactions, or something, at
the fork time? Is that what it is, or...
Is it something that we are... Yeah,
this is a complexity that
should arise only in the period of
time when the fork is activated. Yes, I
think that’s one of them. But it is
something that, as far as I’m aware,
all the full nodes that I
know the code of have a mechanism in
place to handle the mempool during
this activation time, and also, if there
is a rollback, the
code takes care of it, like dealing
with the rejection differences between the
two states. But yes, this is one
complication indeed. Anyone else like to
weigh in on the possibility of changing the
100-byte transaction size? I guess, just as an
implementation detail, is it going to be the
case that the size is kind
of retroactively reduced? So, like, I mean,
putting any kind of checkpoints
aside, after May would you be able
to go back and start a fork from before
May with a 64-byte limit, or something
like that? Or would we have
to maintain two limits, like, if the
height is between these particular
ranges then the limit is 100, and if it’s
after May, you know, then the
limit is 64, type deal? So in terms of
software maintenance, I think it would be
really bad to enforce the retroactive
limits. I don’t think this would be
positive for any implementation to do.
So in theory, yeah, if we changed it to
only 64 bytes being the excluded
transaction size, then you would in
theory be able to go back to the fork
point and start mining, you know, 65- or 80-
or 90-byte transactions that would be
valid. At least, that’s the way I
envisioned it. So, are we able to
determine an owner for this? You know,
this doesn’t
have to be a person that implements it
on all the node software, for example;
it’s just someone to kind of
drive it and stay in communication with
everyone to make sure it’s done. Maybe
writing a spec, some notes on
why we decided to, you know, tweak this
constraint. I could take it. Okay, thank
you, that’s great, thank you. Any comment
on this? Not really, no. I mean, I guess my
overall take on it is that, yeah,
like, maybe it wasn’t the
perfect thing to do, but I sort of wonder,
now that it is in place, if
there’s really, like, a strong motivation
to change it again. But I guess that’s
part of what Andrea can investigate.
Maybe, like the last one, I just feel
like we don’t have enough information to
really know. Next item on the agenda is
Schnorr: is there any chance it will be
ready in time? What needs to happen to
progress this item? I’m going to take a
stab in the dark here: Amaury, would you
like to speak on it?
Yeah, so there is an implementation of
Schnorr that I’ve made, like, more
than a year ago now. It’s not been
through the kind of review required for
me to feel confident deploying it in
the wild at this point in time, and if
that doesn’t happen very soon, then we
won’t be able to deploy it in May. We
have, like, one month and a half. In
addition to the algorithm itself, we need
to integrate it into the existing code,
so that requires some time in itself.
So if the review of the algorithm
itself doesn’t happen very soon, this is
not going to happen.
Can you briefly touch on the use
cases, for the people listening in on us?
So Schnorr is another signature
algorithm than ECDSA. It’s more
flexible in many ways than ECDSA. The
reason why ECDSA became more of a
standard than Schnorr is because for quite
some time Schnorr was patented. This is
actually the number one use case for
ECDSA; the reason to be for ECDSA
is to provide an alternative to Schnorr
that is not patented. So Schnorr has
advantages in terms of validation, because
we can do what we call batch validation,
meaning you can take, for example, eight
signatures and do some computation that
verifies the eight signatures, and that
computation is not as expensive as
checking eight times one signature, right? So
when we have plenty of signatures to
check, which is the case when we receive
a new block, then this is very
advantageous to be able to do that.
That’s one big advantage. From a
user perspective it’s also interesting
because users can do aggregation and
thresholds and stuff like that, that look
just like regular signatures. So this
is an increase in privacy for those
users, and this is also better for the
network itself, because it just has one
signature to verify regardless:
whether it’s a regular signature or a 2-of-3
multisig or whatever, in all cases it looks
to the network the same, and it’s
just one signature to check.
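The batch-validation idea can be illustrated with a toy Schnorr scheme over a small multiplicative group. This is a sketch of the algebra only; the group, parameters, and hash construction are illustrative assumptions, not secp256k1 and not the proposed BCH implementation:

```python
import hashlib
import random

# Toy group: the order-q subgroup of Z_p*, with p = 2q + 1 a safe prime.
p, q, g = 2039, 1019, 4  # g = 2^2 generates the order-q subgroup

def H(*parts):
    h = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(x, m):
    k = random.randrange(1, q)            # nonce
    R = pow(g, k, p)
    e = H(R, pow(g, x, p), m)
    return R, (k + e * x) % q             # (R, s)

def verify_one(P, m, sig):
    R, s = sig
    return pow(g, s, p) == R * pow(P, H(R, P, m), p) % p

def verify_batch(items):
    """items: list of (pubkey P, message m, signature (R, s)).
    One combined check with random coefficients a_i replaces one
    full check per signature: g^(sum a_i*s_i) == prod (R_i*P_i^e_i)^a_i."""
    lhs_exp, rhs = 0, 1
    for P, m, (R, s) in items:
        a = random.randrange(1, q)
        lhs_exp = (lhs_exp + a * s) % q
        rhs = rhs * pow(R * pow(P, H(R, P, m), p) % p, a, p) % p
    return pow(g, lhs_exp, p) == rhs

random.seed(1)
keys = [(x, pow(g, x, p)) for x in (17, 99, 402)]
batch = [(P, f"msg{i}", sign(x, f"msg{i}")) for i, (x, P) in enumerate(keys)]
assert all(verify_one(P, m, sig) for P, m, sig in batch)
assert verify_batch(batch)
```

The point is the final check: one combined exponentiation identity stands in for N individual checks, which is what makes validating a block full of signatures cheaper.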
Any other discussion on this? I guess I’m just
curious: for further review, do you
basically need people who can
look into, like, the crypto math and
all that kind of stuff? Is that kind of what
you’re looking for? Well, the math
itself has been out for many years now,
so that part is fairly well covered. But
when it comes to cryptography, you need
to implement it in ways that are
very specific. You need to make sure
you don’t leave somewhere in memory some
piece of secret data, so that, you know,
some other code on the machine could
not, you know, go rummage through the memory
that you left behind and find some
secret data. You need to make sure that
you implement it in such a way that you
have no branches and no memory accesses
that depend on the secret, because then
you have side-channel attacks through
the branch predictor of the CPU and
through the cache hierarchy of the CPU,
which likewise allow some third party to
recover information about secrets. So
there are all kinds of very specific
things that you really wouldn’t care about in
general code that you need to be careful
about for this kind of code.
And those are not things that you can
really test, so it needs extremely
careful review. Way
more, really, than a
regular piece of code.
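As one small example of the discipline being described, a comparison of secret byte strings can be written with data-independent control flow. This is a generic sketch; Python’s own `hmac.compare_digest` exists for exactly this purpose:

```python
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without branching on their contents:
    the loop always runs to completion, accumulating differences."""
    if len(a) != len(b):   # lengths are usually not secret
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y       # accumulate differences, no early exit
    return acc == 0        # single decision at the very end

secret = b"\x01\x02\x03"
assert ct_equal(secret, b"\x01\x02\x03") is True
assert ct_equal(secret, b"\x01\x02\x04") is False
# The stdlib provides a vetted version of the same idea:
assert hmac.compare_digest(secret, bytes(secret)) is True
```

A naive `a == b` may return as soon as the first byte differs, and that timing difference is the kind of side channel the review has to rule out.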
Just a comment: the parts
that are necessary for consensus, for
validating blocks and checking
transactions, that sort of thing, those
wouldn’t have any secret data. So do you
feel more confident about those parts,
like the parts that are not signing,
let’s say? Yeah, so that’s true; there are
other pitfalls for those parts, but
obviously they are not the same pitfalls.
For instance, there
are various places in the code where you
would hash some value and then check
that the result of the hash is a valid
scalar for the elliptic curve that we
use. And the thing is, you know, the
scalar order is not 2^256 but a
number that is slightly smaller than
that; it’s, you know, 2^256 minus
a bunch of values. So it’s actually
very difficult in practice to find a
preimage of the hash that falls
into that range. So it’s
very difficult to provide actual tests
that are going to trigger that code, right?
But you need all that correct anyway,
because maybe at some point some dude is
going to find the one value for
which he can produce a hash outside the
valid range, and at this point you get a
chain split. So yes, that’s
another kind of pitfall that you
need to be very careful about, that is not
going to be uncovered by testing,
because we don’t know of any,
you know, preimage that falls in that
range at the moment. So, I mean, we’ve got,
I guess, a couple of different ways that
Schnorr signatures can be implemented.
It seems like the simplest
way is to kind of just overload the
existing OP_CHECKSIG, but that also
seems like the most dangerous way to do
it too, because we’re essentially
exposing all UTXOs to this new code.
I don’t know what you guys think about,
like, you know, the security of that
or not. I mean, that seems like the
nicest way to do it if this
was, like, really battle-tested stuff, but
it worries me a little bit that it’s,
like, new and it’s exposing all the old
UTXOs to it. Yeah, that’s why this has
to be done perfectly. Yeah, I would add to
that that the security assumptions made
by Schnorr are the same security
assumptions made by ECDSA. So if you were
to find a way to break the Schnorr
signature algorithm, that would most likely mean
that you can break the ECDSA algorithm
that we currently use. I guess you did
make one comment, though, about, you know,
reducing the number of branches. I know
this isn’t exactly the same, but by
implementing another signature scheme on
old UTXOs, it kind of feels like we’re
introducing a branch, where,
basically, you know, there are a
couple of different outcomes you could get
in order to sign a particular
transaction for old UTXOs. And it
doesn’t feel right, you know, even
given what you said, that the assumption is
the same for both. It just does make
you kind of squirm. So, those branches, I
don’t see them as the most risky ones,
because this part of the code is
completely deterministic, if we are
talking about the interpreter. So it’s
very easy to write
all the unit tests that we need to
make sure that this part of the code is
not going to do something weird. Yeah,
generally, stuff like interpreters and
compilers are, you know, very easy to unit
test extensively, so I’m not too worried
about putting a branch there. I would be
more worried about, you know, it being a
branch in the networking or the DB, or
anything, you know, that is
multi-threaded or depends on
how fast somebody reacts, you know. But
that part is very easy to feed input into and
check the output. Anything further on
this item? We will need an owner, because
Amaury has been asking for review on
the Schnorr code for a little while now:
someone to kind of drive this review
home over the next month and a half,
like you said. Otherwise the review can
be ongoing, but it won’t make it for the
next hard fork. So it depends if we view
this as, you know, valuable enough to put
a lot of weight behind it in short order.
Yeah, I can take that. I’ve been writing
the sort of the Schnorr opcode spec so
far, and I think that, yeah,
there’s a little bit of controversy
there with exactly how it’s done.
But perhaps, if the concern right now
is getting a sort of cryptographically
secure implementation, I can
review that at least, and try to
get people on board with that. Yeah,
I’ll talk with you after, because I
want to talk about getting more
reviewers, maybe even potentially some
people outside of the Bitcoin Cash space,
because I think this is something that
can be reviewed by, you know,
cryptographic experts and that sort of
thing. Yeah, and I think it’s fairly
clear from this call today that, you know,
there’s an invitation for people who are
going to be watching the recording, or
any of the attendees right now: if they
have an interest they can contact you
guys directly on that. So if there’s
nothing further on Schnorr at this time,
we’ll move on to opcodes, the old
opcodes. And Jason, maybe I’ll get you to
help me with
this. And what I have written down here
is: is anyone motivated to take
responsibility for these? Someone needs
to take ownership and work on it if it
is to be included in the upgrade. Should
only some be
activated, for example OP_MUL or OP_INVERT?
Right, so we actually have diffs
available for, I believe, all of those
opcodes that were recommended for the
hard fork. There is review that needs to
be done,
there are tests that need to be written,
but other than that the implementation
is more or less completed. That’s my
understanding; someone can
correct me on that if I’m wrong. So
really we just need someone to own this:
make sure that there is a complete spec
available, that there are plenty of unit
tests available, so that all the
implementations can go and implement
these and make sure that we’re all doing
the same thing. This is mostly an
ownership issue, as opposed to, you know,
writing code and implementing it. So
there is one issue, or at least a potential
issue, in the overflow semantics. This
is something that was raised in
the workgroup at the time when someone
came to us with those opcodes, because
the number system used in Bitcoin is one’s
complement instead of two’s complement, like
pretty much all the regular, you know,
state-of-the-art hardware and everything
working on computers essentially.
We need to have someone look into the
overflow behavior and make sure that it
does make sense and it does what we
expect. As long as there is nobody that
is willing to do that...
Stuff that can overflow,
things like OP_INVERT for instance, or
shifts, can be implemented, but they need
someone to take ownership of them and
track that to make it happen.
Yeah, I agree. I guess, just to throw in my
two cents, it doesn’t have that much value, but
my impression of this whole
thing is that everyone essentially
agrees in principle with having
these, but it doesn’t seem like
anyone’s super motivated to actually
make it happen. So yeah, that’s kind of
my impression. The other thing I was
wondering, I don’t know, is just really about
shift, the whole shift and right-shift.
It seemed like there was some discussion
about whether that was done in the best
way or not. I don’t know if that’s an
issue or not either. Yes, that
was discussed, and the question is:
compliment but are all little endian
even internally you you end up with
ships that cannot work with binary blobs
and numbers the way you would expect on
a written or instruction set so at some
point you need to choose and it’s gonna
be broken for one of the two and the
decision that were made at the time was
that it’s more useful to use shift on
binary blobs than it is on actual
numeric values and so this is what this
is what people went for at the time I I
don’t think there is any new information
that you validate that conclusion that
came up since then by any chance do you
recall in every year the use cases that
people had envisioned for the binary
shift binary plug shift yeah so
generally, you may want to use some
data on the stack and cut it into pieces
to verify some part of it or
aggregate it. So maybe in the case of an
oracle, for instance:
we have an oracle, and so you have some
data that is provided, and you have a
signature on that data, and then you
have a part of the script that verifies
that the data contains this or
that information. So in those cases, stuff
like, you know, split and shift and things
like that, that allow you to select
pieces of the blob, are very useful. Yes,
that’s the main use case,
but you can achieve that already with
split. It’s a little
redundant in some cases. Yes, we can do
that with split, but split only allows you to
do it at byte granularity. So maybe,
like, if you want to pack a series of
flags, for instance, you may not want to
have one byte per flag; instead, do a
shift and a mask, and now you get the
value of the flag. You can do
more compact stuff with shift than you
can do with split. But you’re correct,
it’s not like it’s enabling
anything new. You could do
all of that without the shift, just like
you can do a multiply with a bunch of
additions and a few if statements. But
it’s really useful, I guess.
it’s really useful I guess so that’s I’m
looking for an owner I would actually
like to take this one myself except my
my time is constrained and kind of
stretched and across some other things
at the moment I think everyone here
might be kind of in that same boat but
does anyone know anyone outside of this
meeting that may be interested in in
taking this as an item writing the spec
and making sure the test coverage is too
on those facts I could ask to other bu
if there is something interested in but
not sure that because we we included
ain’t a implementation in the SB client
that we produced but we just bring the
code and plug it in just to be sure to
be compatible bark for dog like we
didn’t change anything in terms of code
the review has been done but not as like
a Maori said not as tofu thoughtfully
like it should have been so it could be
that some some of the the guys maybe you
wanted to do that bar okay I can’t say
for sure I could ask yeah could you do
that please and get back to me with that
because maybe we can coordinate yeah I’m
finding an owner for them okay moving on
to the next item, if there are no further comments. Is there any desire to rework the sigops accounting? Antony brought this forward; do you want to speak to it first? I mean, again, it’s one of these things that has kind of always been hanging around as an issue, and it’s not really urgent, but I guess I just figured it was worth the list. It’s a little weird right now how the sigops are counted; it doesn’t really make sense in a few ways. So I guess in the long term it seems like something that should be dealt with eventually, but it also doesn’t seem super urgent. I don’t know if anyone else has thoughts on that.
To just add to what you said: basically, the sigops counting is done on a per-megabyte basis, so it packages up the first megabyte of transactions and counts the sigops, and then does that for the next megabyte. What it really should be is the sigops over the entire block, just making sure that the sigops per megabyte is lower than a certain value. But that’s not the only problem: the way sigops are counted makes no sense whatsoever; it’s overly complex. Yeah, the way it comes out, it doesn’t really make sense. Oh, sorry.
But it seems like if you’re going to change it, you may as well make it right? So I guess the issue is that it’s a bit of a bigger change. Yeah, so to make it right you essentially need to count the sigops as you execute them, to know how many sigops you actually did in that block. Because right now it’s counting sigops in the outputs of the transactions, which are not executed in the block, and it’s not counting various sigops that are in the inputs, unless they are P2SH inputs, in which case they are counted. The whole thing makes no sense, and doesn’t even accurately reflect the number of sigops that are required to validate the block. To summarize, it’s basically just a bunch of bad heuristics. Yeah, like the multisig thing is weird: it counts as 20 all the time, no matter what, and stuff like that. So anyway, in keeping track of the various items, I just thought I would raise it as an issue, just to keep it on the radar, but I don’t know if anyone has an interest in working on that or not. I was aware that Andrew Stone was thinking about it and has some ideas on improving it, but since he’s not here I can’t speak for him. Another
question, please: is it actually possible to follow the path of correctly counting the sigops while executing, while validating a block? Or is that overkill, and a better set of heuristics would do the trick? Like, do we really need to do the exact accounting for all sigops, or would a better estimate suffice? No.
So counting, you know, doing +1 to some variable when you verify a signature, is probably not even going to show up in any kind of profile, right? But the way sigops are counted right now is not only inaccurate, it’s actually fairly expensive, because you need to parse all the scripts twice: once to execute them, and once to count the sigops with the heuristic that we are using. So it’s probably going to be faster to do it as we execute. There is one tricky situation that we need to make sure we take care of: when you have a transaction in your mempool and you cache the result of the script execution, you need to make sure that you also remember how many sigops were done during that script execution. So we need to extend the script validation cache to also cache the sigop count. Yes, so we need to make sure that the cache keeps track of the sigop count if we want to cache anything, but beside that there is no problem, and it’s probably going to be cheaper and, again, more
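The cache extension described here can be sketched roughly as follows. This is a hypothetical illustration (the class and method names are mine, not Bitcoin ABC’s actual code): the validation cache remembers not only that a script validated, but also how many sigops its execution performed, so a cache hit can still contribute to the block’s sigop accounting.

```python
# Hypothetical sketch: a script validation cache that also remembers
# the sigop count observed during execution, as discussed above.

class ScriptValidationCache:
    def __init__(self):
        self._cache = {}  # txid -> sigops counted during execution

    def check(self, txid, execute):
        """Return the sigop count for txid, executing scripts only on a miss.

        `execute` is a callable that runs the scripts, raises on an
        invalid script, and returns the number of sigops executed.
        """
        if txid in self._cache:
            return self._cache[txid]   # hit: no re-execution needed
        sigops = execute()             # miss: run the scripts and count
        self._cache[txid] = sigops
        return sigops

cache = ScriptValidationCache()
calls = []
def run_scripts():
    calls.append(1)
    return 3                           # pretend 3 sigops were executed

assert cache.check("tx1", run_scripts) == 3   # first time: executes
assert cache.check("tx1", run_scripts) == 3   # cached: not executed again
assert len(calls) == 1
```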
accurate. Okay, yeah, I mostly agree that it’s something that would be nice to fix at some point. I don’t know if this coming hard fork makes the most sense, because it seems like something that takes quite a good deal of planning, much more than, say, those opcodes. I did have a question about what Jason was talking about, the way it currently handles it on a per-megabyte basis. It sounds like my code might actually be wrong on this, because what I do is just take the sigops per megabyte and multiply that by the excessive block size to get the max sigop count. Is that not the way you guys handle it? Yeah, no,
that’s not correct. The way it’s done right now is that you take the block size and round it up to the next multiple of one megabyte. So if the block is, say, 1.2 megabytes, you round it up to 2 megabytes, and then you apply a limit of 20,000 sigops per megabyte: if the size you computed is 2 megabytes, you multiply 2 by 20,000 and that’s the maximum you can have. So it’s not based on the excessive block size? No. So the way you do it is only fine as long as you follow the chain; if you want to be a miner, if you want to mine with BCHD, you need to fix that. Yeah, okay. Yeah, I think we just demonstrated the confusion around the current implementation; we would like to fix it so it’s much simpler, like what you described. I’m actually surprised by how generous the limit is: it’s like one CHECKSIG for every 50 bytes or something like that, which is more than you could normally do, you know?
Yeah, so there are reasons for that, mostly due to historical factors and to the way sigops are counted. You can actually get a density of sigops that is higher than that very easily; the reason is that it counts sigops in outputs. So if you have a bunch of pay-to-script-hash outputs, each of them less than 50 bytes, like a transaction with way more outputs than inputs, you can actually run into the limit. It’s because it’s counting the wrong stuff. Any other comments on reworking the sigops accounting? Okay.
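The rounding rule and the density figure discussed above can be sketched as follows. This is a simplified illustration consistent with the numbers quoted in the discussion, not any node’s actual code; the names are mine:

```python
# Simplified illustration of the sigops limit described above:
# round the block size up to the next whole megabyte, then allow
# 20,000 sigops per megabyte of that rounded-up size.

ONE_MEGABYTE = 1_000_000
MAX_BLOCK_SIGOPS_PER_MB = 20_000

def get_max_block_sigops(block_size: int) -> int:
    mb_rounded_up = (block_size + ONE_MEGABYTE - 1) // ONE_MEGABYTE
    return mb_rounded_up * MAX_BLOCK_SIGOPS_PER_MB

assert get_max_block_sigops(1_000_000) == 20_000   # exactly 1 MB
assert get_max_block_sigops(1_200_000) == 40_000   # 1.2 MB rounds up to 2 MB
assert get_max_block_sigops(8_000_000) == 160_000

# The "one CHECKSIG per 50 bytes" remark follows directly:
assert ONE_MEGABYTE // MAX_BLOCK_SIGOPS_PER_MB == 50
```

Note how multiplying the limit by the exact block size, as described for BCHD, gives 24,000 rather than 40,000 for a 1.2 MB block, which is the divergence pointed out in the discussion.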
I also have listed “any other items to consider”, and this is specifically for the May 2019 upgrade. I have one point I would like to bring up, or at least float the idea. So currently, since the Bitcoin Cash hard fork, we have kept increasing the block size, but one limit that has not been touched is the chain of unconfirmed transactions. So I would like to float the idea that this limit is raised. The problem is, we’ve been doing some experiments with this, and it’s a big headache if nodes are not configured the same. So the only way of actually doing this would be for everyone to activate the new rules at the same time, which would be at hard fork time. This is not a hard fork rule or anything, but if this limit is raised, it should be activated at the same time as the hard fork, just to make sure that all the nodes have the exact same configuration.
Yes, you’re correct. We actually ran into that before when we changed the size of OP_RETURN, where you do it in synchronization with the hard fork even though it’s not a consensus change per se; if you don’t do it with an activation point, you end up essentially completely breaking zero-conf. On the topic of chaining transactions specifically: I agree that we want to get rid of that limit at some point. However, right now, because of the way the software is written, every time you accept a new transaction or remove a transaction from the mempool, you need to go through all the children and parents of it, and if you don’t limit the depth of that, you may end up doing something insanely expensive. It’s not a computation that grows linearly; it’s exponential or factorial or some stupid complexity like that, so it grows very, very quickly, and you expose yourself to a lot of resource usage when you increase that limit. I would rather rework that code so it doesn’t do anything stupid, and then get rid of the limit altogether.
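The traversal being described can be sketched as follows. This is a hypothetical illustration (not any mempool’s actual code): when a transaction enters or leaves the mempool, the node walks its ancestors or descendants, and the chain limit caps how deep that walk can go.

```python
# Hypothetical sketch of the mempool traversal discussed above: walking
# a transaction's in-mempool ancestors, optionally capped at a maximum
# depth, which is the role the 25-transaction chain limit plays.

def ancestors(parents, txid, max_depth=None):
    """Collect all in-mempool ancestors of `txid`.

    `parents` maps a txid to the list of its in-mempool parent txids.
    If `max_depth` is given, the walk stops after that many levels.
    """
    seen, frontier, depth = set(), {txid}, 0
    while frontier and (max_depth is None or depth < max_depth):
        frontier = {p for t in frontier for p in parents.get(t, [])} - seen
        seen |= frontier
        depth += 1
    return seen

# A straight chain tx0 <- tx1 <- ... <- tx99:
parents = {f"tx{i}": [f"tx{i-1}"] for i in range(1, 100)}
assert len(ancestors(parents, "tx99")) == 99                 # unbounded walk
assert len(ancestors(parents, "tx99", max_depth=25)) == 25   # capped walk
```

On a straight chain the walk is linear, but with wide fan-in and fan-out the visited sets blow up, which is the resource-usage concern raised above.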
Yeah, there was actually a study done not too long ago; I’m seeing if I can find it. Someone had profiled different chain lengths to see how poor the performance was, and it gets really bad, if I remember correctly, at around 50 or 70 chained transactions. That said, Emil, do you know if there is any direct positive impact to raising it? Because it’s currently at 25; does it make sense to raise it to 35 or 40, or is raising it just by that much not enough? Yeah, so
we do get some support tickets once in a while. If you, for example, try to play Satoshi Dice too much using our wallet, or any kind of wallet, at some point you will probably hit the maximum chained transactions rule, and you will get weird error messages that the users don’t understand; they just get angry and email support: why can’t I place these bets, what is wrong? And that’s because of the dice: if you win, you get one transaction back, so each play chains two transactions, and within the 25-transaction limit you can only play about 12 times. So if you win 12 times, which you can do playing on the easiest bet, then you get a lesser user experience. So I know they have that problem, and we get the support tickets for it in our wallets. But we are also ourselves building another on-chain dice game, and we are building other on-chain services that would require it; it would help if you can send a few chained transactions. Yeah, there are
some web wallets as well that generate a bunch of chained transactions. Even though they went with SV, I’m thinking about Money Button, for instance: every time you use the Money Button, you chain the transaction with the previous one that you made, so you get the same kind of issue that you get with Satoshi Dice here. So there are a few services that would benefit from more chained transactions. Yeah, also memo.cash: they had to do all these workarounds to be able to send more than 25 messages per block, per user. So they worked around it, but that’s kind of painful. You know, if we remove child
pays for parent, then raising this limit I think becomes significantly easier. Yeah, but child pays for parent is a somewhat useful feature. But right now, is anyone using it? Yeah, it’s kind of limited; I think we merged it like last year. Okay, well, I think that limiting child pays for parent to some kind of depth, like not going back through every level of ancestry but only the parent, limiting the length of the child-pays-for-parent chain, could be a temporary solution, in the time that we rework the code that goes through the unconfirmed chain, to get a significant increase in that limit.
Yeah, there are two problems. One is child pays for parent walking up the chain of parents; we can effectively solve that one by just limiting the depth we check child pays for parent to, say one or two or something small. The other one is sigops accounting and size accounting for block construction, and this is one more reason why we are required to overhaul the way we do block construction. So, if we’re able to limit the child-pays-for-parent depth relatively easily, then does it make sense to coordinate, for this fork, bumping the chained transaction limit up to fifty, essentially doubling it, without having the performance impact? I guess that,
for BU, there’s no problem: we don’t have child pays for parent, and so we are okay. We even tested it with very long chains of transactions in the Gigablock Testnet Initiative, and they were not among the bottlenecks that we hit, so we are okay with it. But for other implementations that do have child pays for parent: once you measure that there’s no performance impact, once you put in place a constraint on the depth of child pays for parent, and raise, at the same time, the length of chained transactions in the mempool to 50, why not? That way, Satoshi Dice players would hit the warning, the error, and whatever complaints come through, at 24 or 25 winning strikes rather than 12, so the number of tickets opened would decrease.
Why not? If this is the case, child pays for parent is not going to be the only issue: the whole accounting of sigops and size for block construction, getblocktemplate, is going to be a problem as well; it also requires arbitrary traversal of the graph when you add or remove transactions from the graph. Okay, so what about gathering a bunch of data, maybe redoing the study that Jason was mentioning before, to see if we could have a reasonable increase without hurting any other part of the system? Like, if we can reach something like forty or fifty without impacting getblocktemplate latency or child pays for parent or whatever, why not? Let’s just measure it and then decide. If we measure something like under five percent, two percent, or even zero percent impact... It’s not the final solution, but it could be a stopgap, a band-aid for the problem that we are facing. Yes, I
wouldn’t be against it. I mean, it’s not really a technical argument, but all those people that run into that problem, we’d appreciate their help; this is open source, you know. If someone cares about some problem, they need to be helping us; we cannot just do everything ourselves. I guess the measurement would be of the proposed change in the default policy, like having an ABC client set the defaults, fifty or twenty-five, for the ancestor and children chain lengths, and measuring the impact on the getblocktemplate RPC and whatever else.
So you want to make sure that you also generate input that is purposefully adversarial, right? Because you don’t want someone to be able to bring down the network just by sending a graph of transactions that sends the software into doing a crazy amount of computation. But isn’t that something that someone could do already, if there is some pool that uses a huge value for this parameter? No, because this accounting happens in the mempool. So you could receive blocks with deeper chains of transactions and it would not be a major issue, except for block propagation, because you might not have them in the mempool. Yeah, so,
like right now, any pool can configure it any way they want and mine a hundred chained transactions if they want to. But what we discovered is that if you have nodes with different rule sets, then they may get out of sync. Yeah, your node has valid transactions that other nodes don’t have, so your node is trying to send and forward transactions that are seen as invalid by other nodes. Okay, so actually it’s a policy that is more consensus-related than we thought before. Not really consensus; it’s more like your node might end up with transactions that don’t exist in other nodes until they’re included in a block. Okay, so the matter is that if there are different settings for these parameters among the miners and the network, block propagation and transaction propagation will be hit, and at the end of the day we will have higher block propagation times, right? Yes. So everyone has to change at the same time; that’s the only way to fix this, and the only way to do that is to have it activate at hard fork time. Yeah, so it seems like it’s not that easy to just raise the limit, even though it would be helpful now to just raise it a little. Yeah, like everything in the Satoshi-derived clients, everything else needs to be rewritten and fixed.
Yeah, so I think we have a bit of a cultural issue generally with BCH here, because everybody knows that these things need to happen, but nobody is stepping up to the plate. The previous change that happened for OP_RETURN, for instance: we had to finish it ourselves, right? Because we received some patches, but they were not high-quality enough, and we had to finish the work. It’s been the same for many other changes. The thing is, people who care about some change need to be stepping up to the plate and making it happen, because those changes don’t happen magically; they don’t materialize out of thin air. If nobody is doing it, then it’s not going to happen. So in this case, for this to happen, we need to refactor the code, and we’ve known that this code needs to be refactored for, like, forever. It’s a big chunk, but if we don’t start, we never finish. So, like it was
discussed earlier already: if we could do a stopgap to limit the depth of child pays for parent, and then kind of defer some of those larger refactors, would that be acceptable? Yeah, though we need to have very good numbers, especially adversarial ones, right? Not just what happens to the CPUs when we just change the config, but what happens if someone tries to exploit it. On a different note, who did the first chained-transaction research? I think it was... okay, maybe we could reach out to him and see if he’d be willing to extend his research. Yeah, that would be good; I could ask him. Are there any other
items to consider before we move on to some questions from the audience? Nope. Okay, I’m going to send out the question to the panelists and I’ll read it too; I cannot pronounce the gentleman’s name, R-o-i-j-i-k-k-u. He says: I have a preliminary merkleized radix tree implementation. The design is primarily a flat-file Merkle tree wherein we store the tree in a series of append-only files. I’d like to know if merklix is still going forward; if not for May, sometime thereafter?
Yes. Certainly not May, but it’s still in the plan. Okay, anyone else like to comment on the question? I guess the only thing is, he says he has an implementation going. I would encourage him to get in touch with developers, get some preliminary review, and start writing tests for it; you know, the only way the development on this moves forward is by people stepping up, right? Jason, are you available if somebody wants to send an email and get the conversation started? Yeah, absolutely. Okay, and if I remember correctly, you are jasonbcox at bitcoinabc.org? Say it again, please. jasonbcox at bitcoinabc.org. You can find our emails on the site, right? Or through GitHub; you can find our emails that way. Okay, I trust that that will get the ball rolling. So,
do we have any other questions from anyone in the audience? There are still six people attending as participants; if you do have any questions, please forward them. It looks like not. Do you guys have any further conclusions that you’d like to share before we end the meeting? Well, I can ask the question: do you feel this has been a productive meeting? Yeah. Yeah, I guess so. My comment is, I think it’s good just to raise our own awareness, and maybe other people’s awareness, about the status of things and what’s happening. Yeah, I think it’s been useful. There are a number of things that will need to be put into a timeline coming up to the upgrade in May, and so it is our intention to facilitate as many meetings as is necessary; off the top of my head right now, I’m thinking every two weeks, similar to what we did prior to the fork in November. So if you guys have any comments on that, or anybody in the audience has any comments, please send them along. I will be processing the video from this call today in the next couple of days and hopefully get it up on YouTube and a variety of other sources, so you can have a look and see some of the people that are working behind the scenes on Bitcoin Cash. So, anyone else have any further questions or comments? Okay, thank you very much for attending. I look forward to chatting with you all again very soon, and thanks to the attendees as well for being here. I’ll bid you a fond farewell.