Guilherme — Yesterday at 4:34 PM
Today I managed to compile the Solidity bounty down to a .tar.xz file, but unfortunately I was not able to submit it to BugLess. I did some surgery on the front-end code so that I could see what error was being returned by wagmi, and it reports an HTTP request failure. Do note that this only happens when I try to submit the Solidity bounty. The Lua bounty, for example, works just fine. My theory is that, because the .tar.xz file is 3.6 MB, the input is too large to be submitted to L1 in terms of gas costs. For comparison, the BusyBox bounty is around 30 KB, the Lua bounty is around 90 KB, and the SQLite bounty is around 300 KB. Maybe we should look for smaller programs?
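For reference, here is a rough back-of-the-envelope sketch (not from the thread) of the calldata math behind that theory, assuming Ethereum's post-EIP-2028 pricing of 16 gas per non-zero calldata byte and a ~30M block gas limit; the sizes are the approximate ones quoted above:

```ts
// Back-of-the-envelope calldata cost of submitting each bounty as a single L1 input.
// Assumes 16 gas per non-zero byte (EIP-2028); compressed .tar.xz data is almost
// all non-zero bytes, so this is close to the real calldata cost.
const GAS_PER_NONZERO_BYTE = 16;
const BLOCK_GAS_LIMIT = 30_000_000; // Ethereum mainnet block gas limit

const bounties: [string, number][] = [
  ["BusyBox", 30 * 1024],
  ["Lua", 90 * 1024],
  ["SQLite", 300 * 1024],
  ["Solidity", Math.round(3.6 * 1024 * 1024)],
];

for (const [name, sizeBytes] of bounties) {
  const gas = sizeBytes * GAS_PER_NONZERO_BYTE;
  const pctOfBlock = ((gas / BLOCK_GAS_LIMIT) * 100).toFixed(0);
  console.log(`${name}: ~${gas.toLocaleString()} gas of calldata (~${pctOfBlock}% of a block)`);
}
```

At roughly 60M gas of calldata, a single 3.6 MB input would not even fit inside one Ethereum block, regardless of gas price, which is consistent with only the Solidity bounty failing.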
Guilherme — Yesterday at 7:11 PM
Do note that this is not a limitation of BugLess, but a limitation of the base layer. If we could access other data sources, such as Espresso, then maybe it would be feasible to submit larger programs.
Cláudio — Yesterday at 7:53 PM
Hi Gui,
what if we submit the bounty as a multi-part compressed file to be reconstructed inside the Cartesi Machine?
We could run an experiment with artificial files of different sizes to find the maximum allowed size (see the sketch after this message). Bounties larger than this threshold would be compressed in multi-part mode.
We used this approach when processing images with OpenCV inside the Cartesi Machine. We split the original image into smaller parts, submitted all the parts, and then reconstructed the original image inside the Cartesi Machine (cc. @marcus Souza).
Maybe @carlo also addressed this limitation with cartridge sizes in Rives.
Cheers!
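A minimal sketch of the size-probing experiment suggested above; `submitInput` is a hypothetical placeholder for whatever call the front end already uses to add an input (e.g. via wagmi):

```ts
// Submit artificial payloads of increasing size and record the largest one
// that goes through; the first rejection marks the threshold region.
import { randomBytes } from "crypto";

async function findMaxInputSize(
  submitInput: (payload: Uint8Array) => Promise<void>, // hypothetical: reuse the existing submission call
): Promise<number> {
  let largestAccepted = 0;
  // Probe from 32 KB up to 4 MB, doubling the size at each step.
  for (let size = 32 * 1024; size <= 4 * 1024 * 1024; size *= 2) {
    try {
      await submitInput(randomBytes(size)); // random data compresses poorly, i.e. a worst case
      largestAccepted = size;
    } catch {
      break;
    }
  }
  return largestAccepted;
}
```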
Marcus Souza — Yesterday at 8:06 PM
Hey 👏🏼 ,
Yes, in my case I had to reconstruct the images, as Cláudio mentioned. Regarding the text above:
For comparison, the BusyBox bounty is around 30 KB
This is the closest to my case in terms of size, because I was working with very small images (100x100 pixels at most). So the approach was to split the image on the frontend and send it as chunks in a few transactions (at the maximum transaction payload size, this fits in 4 transactions). Technically it is possible, but it has some drawbacks.
If we could access other data sources, such as Espresso, then maybe it would be feasible to submit larger programs.
I think this could also improve the UX, since sending the transactions sequentially isn't great in that respect.
Maybe @carlo also addressed this limitation with cartridges size in Rives.
Carlo's solution is probably smarter than mine, so it is good to take a look 😉
Carlo — Yesterday at 8:13 PM
Yeah, we do this for uploading the games as well
Guilherme — Yesterday at 10:52 PM
Thanks for the input, guys! 🙏
Is there some openly available source code that I could look at for inspiration?
edubart — Yesterday at 11:59 PM
The idea is simple: split the file into chunks, submit each chunk as an input, and reassemble them once all chunks are available. You do have to consider what happens if not all chunks are submitted, and design it so that chunks cannot get mixed up when two bounties are being submitted at the same time.
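An illustrative sketch (not actual BugLess code) of how chunks could be tagged and reassembled so that concurrent bounty uploads never get mixed:

```ts
// Each chunk carries an upload id, its index, and the total number of chunks;
// the machine side buffers chunks per upload id and only reassembles the file
// once every chunk has arrived.
interface Chunk {
  uploadId: string;  // e.g. derived from (sender, nonce) by the front end
  index: number;     // 0-based position of this chunk
  total: number;     // total number of chunks in this upload
  data: Uint8Array;  // raw slice of the .tar.xz file
}

class ChunkAssembler {
  private uploads = new Map<string, (Uint8Array | undefined)[]>();

  // Returns the reassembled file when the last missing chunk arrives,
  // or undefined while the upload is still incomplete.
  add(chunk: Chunk): Uint8Array | undefined {
    let parts = this.uploads.get(chunk.uploadId);
    if (!parts) {
      parts = new Array(chunk.total).fill(undefined);
      this.uploads.set(chunk.uploadId, parts);
    }
    parts[chunk.index] = chunk.data;
    if (parts.some((p) => p === undefined)) return undefined;

    this.uploads.delete(chunk.uploadId);
    const size = parts.reduce((n, p) => n + p!.length, 0);
    const file = new Uint8Array(size);
    let offset = 0;
    for (const p of parts) {
      file.set(p!, offset);
      offset += p!.length;
    }
    return file;
  }
}
```

Expiring abandoned uploads (the "not all chunks are submitted" case) would still need its own policy on top of this.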
edubart — Today at 12:06 AM
You have to consider the cost of uploading all of this to L1. I think 3.6 MB may be very expensive, on Ethereum at least; maybe that's not the case on other blockchains. For very large bounties, maybe having their own rollup would be better, but I think that is a complete rework of BugLess. Or maybe in the future we could dehash bounties from a cheap data availability layer.
tuler.eth — Today at 12:11 AM
Using an L2 won’t help much with cost, I think, because it still has to post to L1. It’s an incentive to research other DAs.
edubart — Today at 12:17 AM
Also, bounties that need to be on BugLess from day zero don't really need to be inserted on-chain on day zero; that would be a waste of money. The dapp could start with some initial bounties baked in. We are doing this for Rives, for example: we do support uploading games, but large games like Doom are baked in from day 0 to save costs.
edubart — Today at 12:26 AM
Another idea I had: whoever wants to create a bounty makes a PR to a bounties repository, then with a DAO or something we do an upgrade of the bounties flash drive. This way bounties are created off-chain, while there is some governance on-chain for the machine upgrades. This is also a very different design, though, and requires machine upgrades and some coordination among the nodes to perform them.
Cláudio — Today at 11:41 AM
Hi Gui, given all that was said, IMO we should move ahead and prepare a drive for our machine with the Solidity bounty built in.
With that we could have a first alpha launch with something interesting and really relevant for the community to explore while we think about the feature we want to offer for bigger bounties in the future.
Also, maybe it is a good time for us to have a call and align on all of this. What do you guys think?
gligneul.eth — Today at 12:02 PM
One thing I considered was deploying a new rollup for each bounty, like the dapp sharding idea. This would circumvent the base layer limit issue.
gligneul.eth — Today at 12:06 PM
This would require a major refactoring of the dapp, though, so for this first version it might be better to just embed a particular bounty into the dapp snapshot.
Marcus Souza — Today at 12:25 PM
https://github.com/souzavinny/rollups-examples/blob/main/frontend-biometrics/src/view/layout/home/helpers/send-input.helpers.ts
You can take a look at how the biometrics example dealt with this at the time. On the front end, we had this helper that set the maximum size of a chunk and divided the image string to fit the transaction payload. The backend also had logic to flag, through notices, whether a chunk was the final one.
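For illustration, a simplified version of the kind of front-end helper described above (not the linked code itself; the maximum chunk size is an assumed value):

```ts
// Split a payload string into chunks no larger than a fixed maximum, tagging
// each one with its position and whether it is the last chunk of the upload.
const MAX_CHUNK_SIZE = 100 * 1024; // assumed limit; tune to the real payload cap

interface StringChunk {
  index: number;
  isLast: boolean;
  content: string;
}

function splitIntoChunks(payload: string, maxSize = MAX_CHUNK_SIZE): StringChunk[] {
  const chunks: StringChunk[] = [];
  for (let offset = 0; offset < payload.length; offset += maxSize) {
    chunks.push({
      index: chunks.length,
      isLast: offset + maxSize >= payload.length,
      content: payload.slice(offset, offset + maxSize),
    });
  }
  return chunks;
}

// Each chunk would then be encoded and sent as its own transaction, with the
// backend accumulating chunks until it sees one with isLast === true.
```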