Is Bitcoin Mining Worth It in 2020? - The Washington Note
How Long Does It Take To Get 1 Bitcoin in 2020? Zipmex
7 Reasons Bitcoin Mining is Profitable and Worth It (2020)
Estimated Electricity Cost Of Mining One Bitcoin By Country
Is it worth Mining Bitcoin in 2020? - NorseCorp
01-13 14:53 - 'First of all, that article was written with the agenda and bias of supporting Ripple XRP. It's worth noting that we have much bigger contributing factors to global warming than bitcoin mining. / Also consider the energy costs...' by /u/cryptogrip removed from /r/Bitcoin within 1-11min
First of all, that article was written with the agenda and bias of supporting Ripple XRP. It's worth noting that we have much bigger contributing factors to global warming than bitcoin mining. Also consider the energy costs of using any other mechanism for keeping track of assets, for reconciling them, for ensuring that books are well-ordered, running a cash-based economy, printing money, handling cash, pulling banknotes out of circulation and printing new ones, etc. But to address your point, there are potential solutions. One would be to recover the waste heat for heating homes and businesses. There are efforts to heat homes with exhaust from data centers, for example.
Why I’m Bullish on Yield Farming Ahead of the Eth 2.0 Launch
Hello everyone! I noticed that the hype around yield farming and DEX protocols has died down and that people now focus more on NFTs and artwork-based projects like Rarible. I figured it would be great to (briefly) explain why yield farming lost its popularity and why it will make a comeback ahead of the ETH 2.0 launch.

If you're not new here, you know how the DeFi market evolved over the past months. We had a surge of yield farming (liquidity providing) platforms that were hyped at the very beginning but lost a majority of their users very fast, sometimes only days after launching. I believe most people were disappointed by this sort of mini speculative bubble and by the fact that many projects had devs who rug pulled. Combined with Ethereum's high network congestion at several points in September and October, traders simply decided to cut further losses and leave this LP niche once and for all.

Don't get me wrong, there are still plenty of yield farming projects that people use, and it's not like people stopped token swapping on Uniswap or anything. Ethereum has also calmed down a lot now; the average transaction costs only, what, 80 gwei? But still, I think people are well aware that if another hype cycle started, the very same pattern would repeat.

My take is that yield farming will regain its popularity in December, around the time Ethereum 2.0 launches its first phase and scaling solutions like Optimism launch. If everything runs smoothly, we should have the building blocks for resuming the DeFi bull run and making yield farming stable, rewarding, and popular once more. Sure, Ethereum is only launching a small network upgrade that will run side-by-side with the original network, so we won't see any technical changes anytime soon. But I really believe that ETH 2.0, along with other scaling solutions, will bring back trust and show that there is indeed a bright future for blockchain-based technology ahead of us. And in that future, Proof-of-Stake and liquidity providing will be the modern mining equivalent of running a Bitcoin farm in 2011.

One thing I'm worried about is that enthusiasts, traders, and investors will still fall for the same projects that promise too much and deliver little. We saw numerous projects that were initially regarded as reputable collapse within a week, like SushiSwap. But at the same time, my thinking is that projects that focus on development and spend minimal time on marketing will surface to the top in the end. For example, while everyone was using Uniswap to swap tokens and provide liquidity, I was doing the exact same thing, but cheaper, on Anyswap. It is kinda funny, since people boast that they earned $1200 through the UNI airdrop, but I know for a fact that they spent way more on fees. And guess what? I didn't even break the $100 threshold in the last three months while using Anyswap.

I'm not trying to bash Uniswap here; all I'm saying is that we already have scalable solutions, but people are too scared to introduce new changes into their lives. I'm not here to market you anything. I just want to show you that even today, in October 2020, you can discover scalable and rewarding projects that simply work. Find a developer team that works all the time and doesn't have the time to brag, and you'll know you're on the right road! Last time I checked, the Anyswap team revealed that the average APY for their yield farming pools ranges between 100% and 900%.
When I asked my crypto friends if they knew about this, I found that none of them had even heard of Anyswap. DYOR and find out about the project on your own. I promise that reading about Anyswap and the blockchain it's based on (Fusion) will be worth your time.
Why Osana takes so long? (Programmer's point of view on current situation)
I decided to write a comment somewhere about «Why does Osana take so long?» and what can be done to shorten this time. It turned into a long essay. Here's the TL;DR of it:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading to higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear. Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to give one nonetheless. I am not much interested in Yandere Simulator, nor in this genre in general, but this particular development has a lot to teach any fellow programmers and software engineers, so that they never end up in Alex's situation, especially considering that he is definitely not the first to get himself knee-deep in development hell (do you remember Star Citizen?) and he is definitely not the last.

On the one hand, people see that Alex works incredibly slowly, the equivalent of, like, one hour per day, comparing him with, say, Papers, Please, a game developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely feels that he works to complete exhaustion each day. In fact, I highly suspect that both of those statements are correct! Because of mistakes made during the early development stages, which are highly unlikely to be fixed due to the pressure on the developer right now and due to his overall approach to coding, the cost of adding any relatively large feature (e.g. Osana) can be pretty much comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about.

The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low-hanging fruit» crippled the development and, more importantly, what would have been an ideal course of action, from my point of view, to get out of it. I'll try to explain things in the simplest terms possible.
else if's and the lack of any sort of refactoring in general
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. Antoine de Saint-Exupéry
This is why refactoring — the activity of rewriting your old code so it does the same thing, but does it quicker, in a more generic way, in fewer lines, or more simply — is so powerful. In my experience, you can only keep one module/class/whatever in your brain if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17,000-line class into smaller classes probably won't improve performance at all, but it will make working with parts of this class way easier. Is it too late now to start refactoring? Of course NO: better late than never.
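To make the idea concrete, here is a minimal, hypothetical sketch (all names are invented, not the actual Yandere Simulator code) of what extracting a long «else if» chain into named methods looks like. The behavior is identical; the difference is that each branch can now be read, and kept in your head, on its own:

```csharp
// Hypothetical sketch; names are invented for illustration.
enum StudentPhase { Walking, Talking, Fleeing }

class StudentBefore
{
    public StudentPhase Phase;

    // Before: one enormous method; every new branch inflates it further.
    public void Update()
    {
        if (Phase == StudentPhase.Walking) { /* ...hundreds of lines... */ }
        else if (Phase == StudentPhase.Talking) { /* ...hundreds of lines... */ }
        else if (Phase == StudentPhase.Fleeing) { /* ...hundreds of lines... */ }
    }
}

class StudentAfter
{
    public StudentPhase Phase;

    // After: Update() only dispatches; each phase lives in its own method.
    public void Update()
    {
        switch (Phase)
        {
            case StudentPhase.Walking: UpdateWalking(); break;
            case StudentPhase.Talking: UpdateTalking(); break;
            case StudentPhase.Fleeing: UpdateFleeing(); break;
        }
    }

    void UpdateWalking() { /* the old walking branch, moved verbatim */ }
    void UpdateTalking() { /* the old talking branch, moved verbatim */ }
    void UpdateFleeing() { /* the old fleeing branch, moved verbatim */ }
}
```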
If you think that because you wrote this code you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere; just a general idea will help you out in the future, even if you think that It Just Works™ and you'll never ever need to fix it. In large-scale projects, the time spent writing and debugging one line of code almost always exceeds the time needed to write one comment.

Moreover, the best code is self-evident. In the example above, what the hell does (float) 6 mean? Why not wrap it in a constant with a good, self-descriptive name? Again, it won't affect performance, since the C# compiler is smart enough to silently remove this constant from the compiled code and place its value into the method invocation directly. Such constants are there for you. I rewrote my code above a little bit to illustrate this. With those comments, you don't have to remember your code at all, since its functionality is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge of programming will figure out the purpose of this code. It took me less than half a minute to write those comments, but it'll probably save me quite a lot of time figuring out «what was I thinking back then» one day.

Is it too late now to start adding comments? Again, of course NO. Don't be lazy: redirect all your typing from the «debunk» page (which pretty much does the opposite of debunking, but who am I to judge you here?) into some useful comments.
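The snippet this paragraph refers to did not survive in this copy of the post, so here is a hedged reconstruction of the idea (the surrounding logic and every name are invented; only the (float) 6 literal comes from the text): replace the magic number with a named constant and a one-line comment stating intent.

```csharp
// Hypothetical reconstruction; names and logic are invented for illustration.
class VisionCheck
{
    // Radius (in meters) within which a student notices the player.
    // The C# compiler inlines const values, so this costs nothing at runtime.
    const float NoticeRadiusMeters = 6f;

    // A student notices the player when the player is inside the radius.
    public static bool PlayerIsNoticed(float distanceToPlayerMeters)
    {
        // Before: return distanceToPlayerMeters <= (float) 6;  // what is 6?!
        return distanceToPlayerMeters <= NoticeRadiusMeters;
    }
}
```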
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced right now? Is it a problem in your older code that has only shown up because you never actually exercised that code until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines that automatically execute after each build and check that the environment is still sane and nothing broke on a fundamental level. This is called unit testing, and yes, unit tests won't catch all your bugs, but even catching 20% of bugs at an earlier stage is a huge boon to development speed.

Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of a project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will help you prove to yourself that you have not broken anything, without needing to run the game at all.
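As a flavor of what such a routine looks like, here is a minimal, hypothetical NUnit test (Unity's test tooling is built on NUnit; the game rule and all names below are invented for illustration):

```csharp
// Hypothetical sketch; the reputation rule is invented for illustration.
using NUnit.Framework;

public static class Reputation
{
    // Invented rule: witnessing a murder costs 20 reputation,
    // and reputation never drops below -100.
    public static int AfterWitnessingMurder(int current)
    {
        return System.Math.Max(-100, current - 20);
    }
}

[TestFixture]
public class ReputationTests
{
    [Test]
    public void WitnessingMurderCostsTwentyPoints()
    {
        Assert.AreEqual(30, Reputation.AfterWitnessingMurder(50));
    }

    [Test]
    public void ReputationNeverGoesBelowTheFloor()
    {
        Assert.AreEqual(-100, Reputation.AfterWitnessingMurder(-95));
    }
}
```

Tests like these run in seconds after every build, with no need to load the game, which is exactly the early-warning net described above.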
This is basically self-explanatory: you set this thing up once, and you forget about it. A static code analyzer is another piece of «free real estate» for speeding up the development process by finding tiny errors, mostly silly typos (do you think you are good enough at finding them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet; it is another tool that will help you out with debugging a little, along with the debugger, unit tests and the rest. You need every little bit of help here. Is it too late now to hook up a static code analyzer? Obviously NO.
Say you want to build Osana, but then you decide to implement some feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have essentially done is complicate your life, because now you must also write Osana code for Snap Mode. The way the game architecture is done right now, easter-egg code is deeply interleaved with game logic, which leads to code «spaghettification», which in turn slows the addition of new features, because one has to consider how each new feature would interact with every old feature and easter egg. Even if it is just glancing over one line per easter egg, it adds to the mess, slowly but surely.

A lot of people say the developer should have been doing it the object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you do it the object-oriented way or the usual procedural way; you could theoretically write, say, AI routines in a functional language (e.g. LISP) or even a logic language if you are brave enough (e.g. Prolog). You could even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way of adding a new feature without interfering with your older code (e.g. by creating a child class that encapsulates all the things you need, as sketched below)? Go for it; this feature is basically «free» for you. Otherwise you'd better think twice before doing it, because you are entering «technical debt» territory, borrowing time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down that much, right?». Technical debt incurs interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is a huge tale about how mere «interest» incurred by technical debt can control an entire project, like the tail wagging the dog. I won't elaborate further, since it would take an even larger post to fully describe what's wrong with Yandere Simulator's code architecture.

Is it too late to rebuild the code architecture? Sadly, YES, although it should be possible to split the Student class into descendants by using hooks for individual students. However, the code architecture can still be improved by a vast margin if you start removing easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and it will make it easier to add the «real» features, like Osana or whatever you'd like to accomplish. If you ever want them back, you can track them down in Git history and re-implement them one by one, hopefully without performing shotgun surgery this time.
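Here is a sketch of the «child class» escape hatch mentioned above (hypothetical names; the real Student class is of course far bigger). New behavior is added by extending the class, so the old code paths never need to know the new character exists:

```csharp
// Hypothetical sketch; names invented for illustration.
class Student
{
    // Shared per-frame routine for every student.
    public virtual void UpdateAI()
    {
        /* ...pathing, schedules, reactions common to all students... */
    }
}

// All of Osana's special behavior lives here, and only here.
class OsanaStudent : Student
{
    public override void UpdateAI()
    {
        base.UpdateAI();     // keep the shared behavior intact
        UpdateRivalEvents(); // layer her scripted events on top
    }

    void UpdateRivalEvents()
    {
        /* phone calls, scripted encounters, rival-specific logic... */
    }
}
```

Everything Osana-specific is confined to one class, so adding (or removing) her never touches the other students' code: the exact opposite of shotgun surgery.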
Again, I won't talk about performance here, since you can debug your game at 20 FPS as well as at 60 FPS; that is a very different story. Yandere Simulator is huge. Once you've fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
Fix the code (unavoidable time loss)
Rebuild the project (can take a loooong time)
Load your game (can take a loooong time)
Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can fix that. For instance, I know that Yandere Simulator generates all the students' photos during loading. Why should that be done there? Why not move it to the project-building stage by adding a build hook so Unity does it for you during a full project rebuild, or, even better, why not disable it completely, or replace the photos with «PLACEHOLDER» text in debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community. Is it too late to reduce loading times? Hell NO.
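A hedged sketch of the debug-build shortcut, assuming the photos are produced by some loader script (PortraitLoader and its methods are invented; Debug.isDebugBuild is real Unity API):

```csharp
// Hypothetical sketch; PortraitLoader and its methods are invented names.
using UnityEngine;

public class PortraitLoader : MonoBehaviour
{
    void Start()
    {
        if (Debug.isDebugBuild)
        {
            // Development build: skip the expensive render entirely and
            // shave this step off every edit-compile-test cycle.
            AssignPlaceholderPortraits();
        }
        else
        {
            // Release build: keep the current behavior.
            RenderAllStudentPortraits();
        }
    }

    void AssignPlaceholderPortraits() { /* flat «PLACEHOLDER» texture */ }
    void RenderAllStudentPortraits() { /* the existing slow photo pass */ }
}
```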
Or any other continuous integration tool. «Rebuild the project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford, and a motherboard that supports all of that (of course, a Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest is not important; e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set up this second PC in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From then on, Jenkins takes care of the rest: tracking your Git repository, the (re)building process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports, and whatever else you can and want to hook up. More importantly, you can fix another bug while Jenkins is rebuilding the project for the previous one, et cetera. In general, continuous integration is a great technology for quickly tracking down errors that were introduced in previous versions, helping to avoid those kinds of bug-hunting sessions. I am unsure whether continuous integration is needed for projects 10,000-20,000 source lines long, but things change as soon as we step into 100k+ territory, and Yandere Simulator is by now at approximately 150k source lines of code, so continuous integration is probably well worth it here. Is it too late to add continuous integration? NO, albeit it is going to take some time and skill to set up.
Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, who is also a notorious edgelord who, for example, once told somebody to kill himself, just like… However, horrible person or not, SgtMarkIV does his job. He simply does not care much about public opinion. That's the difference.
This is a follow-up on https://old.reddit.com/Bitcoin/comments/hqzp14/technical_the_path_to_taproot_activation/ Taproot! Everybody wants it!! But... you might ask yourself: sure, everybody else wants it, but why would I, sovereign Bitcoin HODLer, want it? Surely I can be better than everybody else because I swapped XXX fiat for Bitcoin unlike all those nocoiners? And it is important for you to know the reasons why you, o sovereign Bitcoiner, would want Taproot activated. After all, your nodes (or the nodes your wallets use, which, if you are on SPV, you can hopefully pester your wallet vendor/implementor about) need to be upgraded in order for Taproot activation to actually succeed instead of becoming a hot sticky mess. First, let's consider some principles of Bitcoin.
You the HODLer should be the one who controls where your money goes. Your keys, your coins.
You the HODLer should be able to coordinate and make contracts with other people regarding your funds.
You the HODLer should be able to do the above without anyone watching over your shoulder and judging you.
I'm sure most of us here would agree that the above are very important principles of Bitcoin and that these are principles we would not be willing to remove. If anything, we would want those principles strengthened (especially the last one, financial privacy, which current Bitcoin is only sporadically strong with: you can get privacy, it just requires effort to do so). So, how does Taproot affect those principles?
Taproot and Your Coins
Most HODLers probably HODL their coins in singlesig addresses. Sadly, switching to Taproot would do very little for you: it gives a mild discount at spend time, at the cost of a mild increase in fee at receive time (paid by whoever sends to you, so if it's a self-send from a P2PKH or bech32 address, you pay for it); mostly a wash. (Technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash, so the Taproot output is 12 bytes larger; on the other hand, spending from a P2WPKH requires revealing a 32-byte public key later, which is not needed with Taproot, and Taproot signatures are about 9 bytes smaller than P2WPKH signatures, but the 32 bytes plus 9 bytes are divided by 4 because of the witness discount, so the spend saves only about 10 bytes; on net it costs roughly 2 virtual bytes (7 weight units) more per Taproot-output-input than per P2WPKH-output-input; mostly a wash.)

However, as your HODLings grow in value, you might start wondering if multisignature k-of-n setups might be better for the security of your savings. And it is in multisignature that Taproot starts to give benefits! Taproot switches to the Schnorr signing scheme. Schnorr makes key aggregation -- constructing a single public key from multiple public keys -- almost as trivial as adding numbers together. "Almost" because it involves some fairly advanced math instead of simple boring number adding, but hey, when was the last time you added up your grocery list prices by hand, huh? With the current P2SH and P2WSH multisignature schemes, if you have a 2-of-3 setup, then to spend you need to provide two different signatures from two different public keys. With Taproot, you can create, using special moon math, a single public key that represents your 2-of-3 setup. Then you just put two of your devices together, have them communicate with each other (this can be done airgapped, in theory, by sending QR codes: the software to do this is not even being built yet, but that's because Taproot hasn't activated yet!), and they will make a single signature to authorize any spend from your 2-of-3 address. That's 73 witness bytes -- 18.25 virtual bytes -- of signatures you save!

And if you decide that your current setup with 1-of-1 P2PKH / P2WPKH addresses is just fine as-is: well, that's the whole point of a softfork: backwards-compatibility. You can receive from Taproot users just fine, and once your wallet is updated for Taproot-sending support, you can send to Taproot users just fine as well! (P2WPKH and P2WSH -- SegWit v0 -- addresses start with bc1q; Taproot -- SegWit v1 -- addresses start with bc1p, in case you wanted to know the difference; in bech32, q is 0, p is 1.)

Now how about HODLers who keep all, or some, of their coins on custodial services? Well, any custodial service worth its salt would be doing at least 2-of-3, or probably something even bigger, like 11-of-15. So your custodial service, if it switched to using Taproot internally, could save a lot more (imagine an 11-of-15 getting reduced from 11 signatures to just 1!), which -- we can only hope! -- should translate to lower fees and better customer service from your custodial service! So I think we can say, very accurately, that the Bitcoin principle -- that YOU are in control of your money -- can only be helped by Taproot (if you are doing multisignature), and, because P2PKH and P2WPKH remain validly-usable addresses in a Taproot future, will not be harmed by Taproot.
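Putting the parenthetical's own byte counts into one line of arithmetic (witness bytes weigh 1 weight unit, all other bytes weigh 4; the byte counts are the post's, so treat this as a rough count):

```latex
\underbrace{(33 - 21) \times 4}_{\text{larger output}}
\;-\;
\underbrace{(32 + 9) \times 1}_{\text{smaller witness}}
\;=\; 48 - 41 \;=\; +7\ \text{WU} \;\approx\; +1.75\ \text{vbytes per input}
```

A couple of virtual bytes on a transaction of a few hundred: mostly a wash, as the post says.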
Its benefit to this principle might be small (it mostly only benefits multisignature users), but since it has no drawbacks here (i.e. singlesig users can continue to use P2WPKH and P2PKH), this is still a nice, tidy win! (Even singlesig users get a minor benefit, in that multisig users will now reduce their blockchain space footprint, so fees can be kept low for everybody; for example, even if you have your single set of private keys engraved on titanium plates sealed in an airtight box stored in a safe buried in a desert protected by angry nomads riding giant sandworms because you're the frickin' Kwisatz Haderach, you still gain some benefit from Taproot.) And here's the important part: if P2PKH/P2WPKH is working perfectly fine for you and you decide never to use Taproot yourself, Taproot will not affect you detrimentally. First, do no harm!
Taproot and Your Contracts
No one is an island, no one lives alone. Give and you shall receive. You know: by trading with other people, you can gain expertise in some obscure little necessity of the world (and greatly increase your productivity in that little field), and then trade the products of your expertise for necessities other people have created, all of you thereby gaining gains from trade. So, contracts, which are basically enforceable agreements that facilitate trading with people who you do not personally know and therefore might not trust.

Let's start with a simple example. You want to buy some gewgaws from somebody. But you don't know them personally. The seller wants the money, you want their gewgaws, but because of the lack of trust (you don't know them!! what if they're scammers??) neither of you can benefit from gains from trade. However, suppose both of you know of some entity that both of you trust. That entity can act as a trusted escrow. The entity provides you security: this enables the trade, allowing both of you to get gains from trade.

In Bitcoin-land, this can be implemented as a 2-of-3 multisignature. The three signatories in the multisignature would be you, the gewgaw seller, and the escrow. You put the payment for the gewgaws into this 2-of-3 multisignature address. Now, suppose it turns out neither of you are scammers (whaaaat!). You receive the gewgaws just fine and you're willing to pay up for them. Then you and the gewgaw seller just sign a transaction --- you and the gewgaw seller are 2, sufficient to trigger the 2-of-3 --- that spends from the 2-of-3 address to a singlesig address the gewgaw seller wants (or whatever address the gewgaw seller wants). But suppose some problem arises. The seller gave you gawgews instead of gewgaws. Or you decided to keep the gewgaws but not sign the transaction to release the funds to the seller. In either case, the escrow is notified, and it can sign with you to refund the funds back to you (if the seller was a scammer), or it can sign with the seller to forward the funds to the seller (if you were a scammer).

Taproot helps with this: as mentioned above, it allows multisignature setups to produce only one signature, reducing blockchain space usage, and thus making contracts --- which by definition require multiple people; you don't make contracts with yourself --- cheaper (which, we hope, enables more of these setups to happen, for more gains from trade for everyone; also, moon and lambos). (Technology-wise, it's easier to make an n-of-n than a k-of-n: a k-of-n would require a complex setup involving a long ritual with many communication rounds between the n participants, but an n-of-n can be done trivially with some moon math. You can, however, make what is effectively a 2-of-3 by using a three-branch SCRIPT: either 2-of-2 of you and seller, OR 2-of-2 of you and escrow, OR 2-of-2 of escrow and seller. Fortunately, Taproot adds a facility to embed a SCRIPT inside a public key, so you can have a 2-of-2 Taprooted address (between you and seller) with a SCRIPT branch that can instead be spent with 2-of-2 (you + escrow) OR 2-of-2 (seller + escrow), which implements the three-branched SCRIPT above.
If neither of you are scammers (hopefully the common case), then you both sign using your keys and never have to contact the escrow, since you are just using the escrow public key without coordinating with them (because n-of-n is trivial but k-of-n requires a setup with communication rounds). So in the "best case" where both of you are honest traders, you also get a privacy boost, in that the escrow never learns you have been trading in gewgaws, I mean ewww, gawgews are much better than gewgaws and therefore I now judge you for being a gewgaw enthusiast, you filthy gewgawer.)
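For the curious, here is one way to write down the construction just described, in BIP 341 style (a hedged sketch: KeyAgg stands for a MuSig-style aggregation, and the two leaf scripts are only named, not spelled out):

```latex
\begin{aligned}
P_{\text{int}} &= \text{KeyAgg}(P_{\text{you}}, P_{\text{seller}}) \\
t &= \text{MerkleRoot}\big(\text{leaf}_1 : \text{2-of-2}(P_{\text{you}}, P_{\text{escrow}}),\ \
      \text{leaf}_2 : \text{2-of-2}(P_{\text{seller}}, P_{\text{escrow}})\big) \\
Q &= P_{\text{int}} + H_{\text{TapTweak}}(P_{\text{int}} \parallel t)\cdot G
\end{aligned}
```

Spending with the aggregate key (you + seller together) reveals nothing about the tree; spending through a leaf reveals only that one leaf, never its sibling.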
Taproot and Your Contracts, Part 2: Cryptographic Boogaloo
Now suppose you want to buy some data instead of things. For example, maybe you have some closed-source software in trial mode installed, and want to pay the developer for the full version. You want to pay for an activation code. This can be done, today, by using an HTLC. The developer tells you the hash of the activation code. You pay to an HTLC, paying out to the developer if it reveals the preimage (the activation code), or refunding the money back to you after a pre-agreed timeout. If the developer claims the funds, it has to reveal the preimage, which is the activation code, and you can now activate your software. If the developer does not claim the funds by the timeout, you get refunded. And you can do that, with HTLCs, today. Of course, HTLCs do have problems:
Privacy. Everyone scraping the Bitcoin blockchain can see any HTLCs, and preimages used to claim them.
This can be mitigated by using offchain techniques so HTLCs are never published onchain in the happy case. Lightning would probably in practice be the easiest way to do this offchain. Of course, there are practical limits to what you can pay on Lightning. If you are buying something expensive, then Lightning might not be practical. For example, the "software" you are activating is really the firmware of a car, and what you are buying is not the software really but the car itself (with the activation of the car firmware being equivalent to getting the car keys).
Even offchain techniques need an onchain escape hatch in case of unresponsiveness! This means that, if something bad happens during payment, the HTLC might end up being published onchain anyway, revealing the fact that some special contract occurred.
And an HTLC that is claimed with a preimage onchain will also publicly reveal the preimage onchain. If that preimage is really the activation key of a piece of software, then it can now be pirated. If that preimage is really the activation key for your newly-bought cryptographic car --- well, not your keys, not your car!
Trust requirement. You are trusting the developer that it gives you the hash of an actual valid activation key, without any way to validate that the activation key hidden by the hash is actually valid.
Fortunately, with Schnorr (which is enabled by Taproot), we can now use the Scriptless Script construction by Andrew Poelstra. Scriptless Script allows a new construction, the PTLC or Pointlocked Timelocked Contract. Instead of hashes and preimages, just replace "hash" with "point" and "preimage" with "scalar". Or as you might know them: a "point" is really a "public key" and a "scalar" is really a "private key". What a PTLC does is that, given a particular public key, the pointlocked branch can be spent only if the spender reveals the private key of the given public key to you.

Another nice thing about PTLCs is that they are deniable. What appears onchain is just a single 2-of-2 signature between you and the developer/manufacturer. It's like a magic trick. This signature has no special watermarks; it's a perfectly normal signature (the pledge). However, from this signature, plus some data given to you by the developer/manufacturer (known as the adaptor signature), you can derive the private key of a particular public key you both agreed on (the turn). Anyone scraping the blockchain will just see signatures that look like every other signature, and as long as nobody manages to hack you and get a copy of the adaptor signature or the private key, they cannot get the private key behind the public key (point) that the pointlocked branch needs (the prestige). (Just to be clear, the public key you are getting the private key of is distinct from the public key that the developer/manufacturer will use for its own funds. The activation key is different from the developer's onchain Bitcoin key, and it is the activation key whose private key you will be learning, not the developer's/manufacturer's onchain Bitcoin key.) So:
Privacy: PTLCs are private even if done onchain. Nobody else can learn what the private key behind the public key is, except you who knows the adaptor signature that when combined with the complete onchain signature lets you know what the private key of the activation key is. Somebody scraping the blockchain will not learn the same information even if all PTLCs are done onchain!
Lightning is still useful for reducing onchain use, and will also get PTLCs soon after Taproot is activated, but even if something bad happens and a PTLC has to go onchain, it doesn't reveal anything!
Trust: the validity of what you are buying can be proven more easily with a public-private keypair than with a hash-preimage pair.
For example, the developer of the software you are buying could provide a signature signing a message saying "unlock access to the full version for 1 day". You can check if feeding this message and signature to the program will indeed unlock full-version access for 1 day. Then you can check if the signature is valid for the purported pubkey whose private key you will pay for. If so, you can now believe that getting the private key (by paying for it in a PTLC) would let you generate any number of "unlock access to the full version for 1 day" message+signatures, which is equivalent to getting full access to the software indefinitely.
For the car, the manufacturer can show that signing a message "start the engine" and feeding the signature to the car's firmware will indeed start the engine, and maybe even let you have a small test drive. You can then check if the signature is valid for the purported pubkey whose privkey you will pay for. If so, you can now believe that gaining knowledge of the privkey will let you start the car engine any time you want.
(pedantry: the signatures need to be unique else they could be replayed, this can be done with a challenge-response sequence for the car, where the car gathers entropy somehow (it's a car, it probably has a bunch of sensors nowadays so it can get entropy for free) and uses the gathered entropy to challenge you to sign a random number and only start if you are able to sign the random number; for the software, it could record previous signatures somewhere in the developer's cloud server and refuse to run if you try to replay a previously-seen signature.)
Taproot lets PTLCs exist onchain because it enables Schnorr, which is a requirement of PTLCs / Scriptless Script. (Technology-wise, take note that Scriptless Script works only for the "pointlocked" branch of the contract; you need normal Script, or a pre-signed nLockTimed transaction, for the "timelocked" branch. Since Taproot can embed a script, you can have the Taproot pubkey be a 2-of-2 to implement the Scriptless Script "pointlocked" branch, then have a hidden script that lets you recover the funds with an OP_CHECKLOCKTIMEVERIFY after the timeout if the seller does not claim the funds.)
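For reference, here is one common way the pledge/turn/prestige trick is written down for Schnorr adaptor signatures (a hedged sketch, not necessarily the exact variant PTLC implementations will settle on). The seller's secret scalar t, with public point T = tG, is folded into an otherwise ordinary signature with nonce R = kG, key P = xG, and message m:

```latex
\begin{aligned}
s' &= k + H(R + T \parallel P \parallel m)\cdot x
      &&\text{(adaptor signature, shared off-chain)} \\
s  &= s' + t
      &&\text{(completed signature, published onchain)} \\
t  &= s - s'
      &&\text{(the buyer subtracts and learns the secret)}
\end{aligned}
```

The onchain value s verifies like any other Schnorr signature (against the nonce point R + T), which is why a blockchain observer sees nothing special.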
Now if you were really paying attention, you might have noticed this parenthetical:
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash...)
So wait, Taproot uses raw 32-byte public keys, and not public key hashes? Isn't that more quantum-vulnerable?? Well, in theory yes. In practice, it probably is not. It's not that hashes can be broken by quantum computers --- they still can't be. Instead, you have to look at how you spend from a P2WPKH/P2PKH pay-to-public-key-hash. When you spend from a P2PKH / P2WPKH, you have to reveal the public key. Then Bitcoin hashes it and checks if this matches the public-key-hash, and only then actually validates the signature for that public key. So an unconfirmed transaction, floating in the mempools of nodes globally, will show, in plain sight for everyone to see, your public key. (Public keys should be public; that's why they're called public keys, LOL.) And if quantum computers are fast enough to be of concern, then they are probably fast enough that, in the several minutes to several hours from broadcast to confirmation, they have already cracked the public key that is openly broadcast with your transaction. The owner of the quantum computer can now replace your unconfirmed transaction with one that pays the funds to itself. Even if you did not opt in to RBF, miners are still incentivized to support RBF on RBF-disabled transactions.

So the extra hash is not as significant a protection against quantum computers as you might think. Instead, the extra hash-and-compare is just extra validation effort. Further, if you have ever, in the past, spent from the address, then there already exists a transaction indelibly stored on the blockchain, openly displaying the public key from which quantum computers can derive the private key. So those addresses are still vulnerable to quantum computers. For the most part, the cryptographers behind Taproot (and Bitcoin Core) are of the opinion that quantum computers capable of cracking Bitcoin pubkeys are unlikely to appear within a decade or two. In summary:
Current quantum computers can barely crack the prime factorization problem for 5-bit primes.
The 256-bit elliptic curve used by Bitcoin is, by my (possibly wrong) understanding, equivalent to 4096-bit primes, so you can see a pretty big gap between now (5-bit primes) and what is needed (4096-bit primes).
A lot of financial non-Bitcoin systems use the equivalent of 3072-bit primes or less, and are probably easier targets to crack than the equivalent-to-4096-bit-primes Bitcoin.
Quantum computers capable of cracking Bitcoin are still far off.
Pay-to-public-key-hash is not as protective as you might think.
We will probably see banks get cracked before Bitcoin, so the banking system is a useful canary-in-a-coal-mine to see whether we should panic about being quantum vulnerable.
For now, the homomorphic and linear properties of elliptic curve cryptography provide a lot of benefits --- particularly the linearity property is what enables Scriptless Script and simple multisignature (i.e. multisignatures that are just 1 signature onchain). So it might be a good idea to take advantage of them now while we are still fairly safe against quantum computers. It seems likely that quantum-safe signature schemes are nonlinear (thus losing these advantages).
If you are a singlesig HODL-only Bitcoin user, Taproot will not affect you positively or negatively. Importantly: Taproot does no harm!
If you use or intend to use multisig, Taproot will be a positive for you.
If you transact onchain regularly using typical P2PKH/P2WPKH addresses, you get a minor reduction in feerates since multisig users will likely switch to Taproot to get smaller tx sizes, freeing up blockspace for yours.
If you are using multiparticipant setups for special systems of trade, Taproot will be a positive for you.
Remember: Lightning channels are multiparticipant setups for special systems of lightning-fast offchain trades!
I Wanna Be The Taprooter!
So, do you want to help activate Taproot? Here's what you, mister sovereign Bitcoin HODLer, can do!
If you have developer experience especially in C, C++, or related languages
Review the Taproot code! There is one pull request in Bitcoin Core, and one in libsecp256k1. I deliberately am not putting links here, to avoid brigades of nontechnical but enthusiastic people leaving pointless reviews, but if you are qualified you know how to find them!
But I am not a cryptographer/Bitcoin Core contributor/mathematician/someone as awesome as Pieter Wuille
That's perfectly fine! The cryptographers have been over the code already and agree the math is right and the implementation is right. What is wanted is the dreary, dreary, dreary software engineering: are the comments comprehensive and understandable? No misspellings in the comments? Understandable variable names? A reasonable function naming convention? Misleading coding style? Off-by-one errors in loops? Conditions not covered by tests? Accidental mixups of variables with the same types? Missing frees? Read-before-init? Better test coverage of suspicious-looking code? Missing or mismatched header guards? Portability issues? Consistent coding style? You know, stuff any coder with a few years of experience in coding anything might be able to catch. With enough eyes, all bugs are shallow!
If you are running a mining pool/mining operation/exchange/custodial service/SPV server
Be prepared to upgrade!
One of the typical issues with upgrading software is that subtle incompatibilities with your current custom programs tend to arise, disrupting operations and potentially losing income due to downtime. If so, consider moving to the two-node setup suggested by gmax, which is in the last section of my previous post. With this, you have an up-to-date "public" node and a fixed-version "private" node, with the public node protecting the private node from any invalid chainsplits or invalid transactions. Moving to this setup from a typical one-node setup should be smooth and should not disrupt operations (too much).
If you are running your own fullnode for fun or for your own wallet
Be prepared to upgrade! The more nodes validating the new rules (even if you are a non-mining node!), the safer every softfork will be!
If you are using an SPV wallet or custodial wallet/service (including hardware wallets using the software of the wallet provider)
Contact your wallet provider / SPV server and ask for a statement on whether they support Taproot, and whether they are prepared to upgrade for Taproot! Make it known to them that Taproot is something you want!
But I Hate Taproot!!
Raise your objections to Taproot now, or forever hold your peace! Maybe you can raise them here, and some of the devs (probably nullc, he goes everywhere, even in /r/btc!) might be able to see your objections! Or if your objections are very technical, head over to the appropriate pull request and object away!
Maybe you simply misunderstand something, and we can clarify it here!
Or maybe you do have a good objection, and we can make Taproot better by finding a solution for it!
If you are like me, then you are probably always looking for new ways to generate income. There are always new opportunities out there to make a quick buck; however, I try to be selective and do extensive research into the opportunities I spot. I have recently become very interested in the opportunities that Bitcoin trading presents. Increasing your streams of passive income through a diverse range of methods can start to add up to a significant amount each month. Here are a few ways to start making money through Bitcoin.

Mining Bitcoin
Essentially, mining means using computing power to secure a network in return for Bitcoin rewards. It is the oldest form of earning passive income through Bitcoin, as it doesn't require you to have cryptocurrency holdings. In the early days, this method was a viable solution; however, as the network hash rate increased, most miners shifted to using more powerful Graphics Processing Units. Due to the vast increase in competition, mining became the playing field of Application-Specific Integrated Circuits (ASICs) - electronics that use mining chips tailor-made for this specific purpose. Nowadays, setting up and maintaining mining equipment requires substantial investment and technical expertise - but it's worth it if you happen to fit the criteria. Not to mention the cooling costs associated with running a machine powerful enough to mine Bitcoin.

Staking
Staking is a less resource-intensive alternative to mining, involving keeping funds in a suitable wallet and performing various network functions to receive staking rewards. Usually, staking involves setting up a staking wallet and simply holding the coins. In other cases, the process will involve a staking pool. Some exchanges will do all this for you - all you have to do is keep your tokens on the exchange and all the technical requirements will be taken care of. This is a great way to increase your holdings with minimal effort.

Lending
Lending is a completely passive method of earning interest on your Bitcoin holdings. There are several peer-to-peer lending platforms that enable you to lock up your funds for a period of time and later collect interest payments. The interest rate could either be set by the platform or based on the current market rate. This method is ideal for those looking for long-term rewards; however, it is worth noting that locking your funds in a smart contract always carries the risk of bugs.

Finding a Bitcoin Trading Company
For those who are less technically inclined and don't have a firm grasp of how Bitcoin trading works, there is always the option of finding a company that will trade on your behalf. The issue with this is that there are many seedy companies who claim to do this but then end up ripping you off. In order to have peace of mind, you need to find a Bitcoin trading company that understands the market and is reputable. I stumbled across Mirror Trading International, a company that operates out of South Africa. What immediately stood out for me was that they were transparent and professional in their engagements. Daily profits are paid out on the days where profits are recorded. In addition to this, they have made the entire registration and withdrawal process as simple as possible. All you have to do is fund your account with the minimum amount and you can start earning. If you do need to access the funds, that is a simple process that you have full control of.
I would suggest everyone do their own research and keep an open mind. The thousands of testimonials, along with members from all across the world, are a testament that they are a legitimate company that is sustainable.
So, some background: I just bought some bitcoin at a bitcoin ATM to try it out. I put $40 in the machine and received $34.80ish worth of bitcoin. The fee at the machine was 15%: $34.80 * 1.15 ≈ $40 (which is what I put in the machine). The machine gives you a link to a blockchain explorer where you can see when your transaction becomes confirmed. Looking through the transaction, the total fee for the transaction (which I assume is the mining fee) is $6.22. So I put $40 in the machine, received $34.80, and the transaction cost $6.22 to process. $34.80 + $6.22 = $41.02, so is the machine in fact losing money on the transaction? $41.02 > $40. My question: are the "Fees" in the blockchain explorer in fact "mining fees"? I know this price changes throughout the day, but $6.22 seems high? And would the company running the machine actually be dumb enough to lose money on the transaction? Or can they not control what the mining fee will be? Thanks in advance :)
[OWL WATCH] Waiting for "IOTA TIME" 20; Hans's re-defined directions for DLT
Disclaimer: This is my editing, so there could be some misunderstandings...

--------------------------------------------

wellwho (today 4:50 PM): u/BenRoyce how far is society2 from having something clickable powered by IOTA?

Ben Royce (today 4:51 PM): demo of basic tech late Sep / early Oct. MVP early 2021

---------------------------------------------------

HusQy: Colored coins are the most misunderstood upcoming feature of the IOTA protocol. A lot of people see them just as a competitor to ERC-20 tokens on ETH and therefore a way of tokenizing things on IOTA, but they are much more important because they enable "consensus on data".

Bob: All this stuff already works on Neblio but decentralized and scaling to 3500 TPS

HusQy: Neblio has 8 MB blocks with a 30-second blocktime. This is a throughput of 8 MB / 30 seconds = 267 kB per second. Transactions are 401+ bytes, which means that throughput is 267 kB / 401 bytes = 665 TPS. IOTA is faster, feeless and will get even faster with the next update ...

-----------------------------------------------------------------------------

HusQy: Which DLT would be more secure? One that is collaboratively validated by the economic actors of the world (corporations, companies, foundations, states, people) or one that is validated by an anonymous group of wealthy crypto holders?

HusQy: The problem with current DLTs is that we use protection mechanisms like Proof of Work and Proof of Stake that are inherently hard to shard. The more shards you have, the more you have to distribute your hashing power and your stake, and the less secure the system becomes.

HusQy: Real-world identities (i.e. all the big economic actors), however, could shard into as many shards as necessary without making the system less secure. Today's DLTs waste trust in the same way as PoW wastes energy.

HusQy: Is a secure money worth anything if you can't trust the economic actors that you would buy stuff from? If you buy a car from Volkswagen and they just beat you up and throw you out of the shop after you paid, then a secure money won't be useful either :P

HusQy: I believe that if you want to make DLT work and be successful then we need to ultimately incorporate things like trust in entities into the technology. Examples like Wirecard show that trusting a single company is problematic, but trusting the economy as a whole should be at least as secure as today's DLTs. And as soon as you add sharding it will be orders of magnitude more secure. DLT has failed to deliver because people have tried to build a system in a vacuum that completely ignores things that already exist and that you can leverage.

----------------------------------------------------------------------------------

HusQy: Blockchain is a bit like people sitting in a room, trying to communicate through BINGO sheets. While they talk, they write down some of the things that have been said, and as soon as one screams BINGO! he hands around his sheet to inform everybody about what has been said.

HusQy: If you think that this is the most efficient form of communication for people sitting in the same room, and that the answer to scalability is to make bigger BINGO sheets or to allow people to solve the puzzle faster, then you will most probably never understand what IOTA is working on.
--------------------------------------------------------------------------------

HusQy: Blockchain does not work with too many equally weighted validators. If 400 validators produce a validating statement (block) at the same time, then only one can survive as part of the longest chain. IOTA is all about collaborative validation. Another problem of blockchain is that every transaction gets sent twice through the network: once from the nodes to the miners, and a second time from the miners as part of a block. Blockchain will therefore always only be able to use 50% of the network throughput. And the last problem is that you cannot arbitrarily decrease the time between blocks, as it breaks down if the time between blocks gets smaller than the average network delay. The idle time between blocks is precious time that could be used for processing transactions.

-----------------------------------------------------------------------------

HusQy: I am not talking about a system with a fixed number of validators, but one that is completely open and permissionless, where any new company can just spin up a node and take part in the network.

------------------------------------------------------------------------

HusQy: Proof of Work and Proof of Stake are both centralizing sybil-protection mechanisms. I don't think that Satoshi wanted 14 mining pools to run the network. And "economic clustering" was always the "end game" of IOTA.

-----------------------------------------------------------------------------

HusQy: Using Proof of Stake is not trustless. Proof of Stake means you trust the richest people and hope that they approve your transactions. The rich are getting richer (through your fees) and you are getting more and more dependent on them. Is that your vision of the future?

----------------------------------------------------------------------------

HusQy: Please read again exactly what I wrote. I have not spoken of introducing governance by large companies, nor have I said that IOTA should be permissioned. We aim for a network with millions or even billions of nodes. That can't work at all with a permissioned ledger - who would vet all these devices or authorize them to participate in the network? My key message was the following: Proof of Work and Proof of Stake, if you split them up via sharding, will always be less secure, because you simply need fewer coins or less hash power to have the majority of the votes in a shard. This is not the case with trust in society and the economy. When all companies in the world jointly secure a DLT, then these companies could install any number of servers in any number of shards without compromising security, because "trust" does not become less just because they operate several servers. First of all, that is a fact and nothing else.

HusQy: Proof of Work and Proof of Stake are, contrary to the assumption of many, not "trustless" but follow the maxim: "In the greed of miners we trust!" The basic assumption that the miners will not destroy the system that generates income for them is fundamental to the security of every DLT. I think a similar assumption would still be correct for the economy as a whole: the companies of the world (and not just the big ones) would not destroy the system with which their customers pay them. In this respect, a system which is validated by society and the economy as a whole is probably just as "safe" as a system which is validated by a few anonymous miners. Why a small elite of miners should be better validators than the people and companies of this world is, to be honest, not clear to me.

HusQy: As already written in my other thread, safe money does not bring you anything if you have to assume that Volkswagen will beat you up and throw you out of the store after you paid for a car. The thoughts I discussed say nothing about the immediate future of IOTA (for Coordicide we use mana) but rather speak of a world where DLT has already become an integral part of our lives, where a corresponding number of companies, non-profit organizations and people use DLT, and where such a system could be implemented. The point here is not to create a governance solution that in any way influences the development of the technology, or where nodes have to give their OK first, but to develop a system that enables people to freely choose the validators they trust. For example, you can also declare your grandma to be a validator when you install your node, or your local supermarket. Economic relationships in the real world usually form a close-knit network, and it doesn't really matter who you follow as long as the majority is honest.

HusQy: I also don't understand your criticism of censorship, because something like that is almost impossible in IOTA. Each transaction confirms two other transactions, which grows exponentially. If someone wanted to ignore a transaction, he would, after a very short time, have to ignore an exponential number of other transactions. In contrast to blockchain, validators in IOTA do not decide what is included in the ledger; they only decide which of several double spends should be confirmed. Honest transactions are confirmed simply by having other transactions reference them, and the "validators" are not even asked.

HusQy: As for the "dust problem", this is indeed something that is a bigger problem for IOTA than for other DLTs, because we have no fees, but it is also not an unsolvable problem. Bitcoin initially solved a similar problem by declaring outputs with an amount below 5430 satoshis invalid (github.com/Bitcoin/Bitcoi…). A similar solution, where an address must contain a minimum amount, is also conceivable for IOTA, and we are discussing several possibilities (including compressing dust using cryptographic methods). Contrary to your assumption, checking such a minimum amount is not slow but just as fast as checking a normal transaction. And in my opinion this is no problem at all for IOTA's use case. The important thing is that you can send small amounts, but since IOTA is feeless it is also okay to expect recipients to regularly sweep their payments onto a merge address. The wallets already do this automatically (sweeping) and for machines it is no problem to automate this process. So far this was not a problem because the TPS were limited, but with the increased throughput of Chrysalis it becomes relevant, and appropriate solutions are being discussed and will then be implemented accordingly. I think that was the most important thing first, and if you have further questions just write :)

HusQy: And to be very clear! I really appreciate you and your questions and don't see this as an attack at all!
People who see such questions as inappropriate criticism should really ask whether they are still objective. I have little time at the moment because ... HusQy ... my girlfriend is on tour and has to take care of our daughter, but as soon as she is back we can discuss these things in a video. I think that the concept of including the "real world" in the concepts of DLT is really exciting and ... HusQy ... that would certainly be exciting to discuss in a joint video. But again, that's more of a vision than a specific plan for the immediate future. This would not work with blockchain anyway but IOTA would be compatible so why not think about such things. ----------------------------------------------------------------------- HusQy All good my big one :P But actually not that much has changed. There has always been the concept of "economic clustering" which is basically based on similar ideas. We are just now able to implement things like this for the first time. ---------------------------------------------------------------------------------- HusQy Exactly. It would mean that addresses "cost" something but I would rather pay a few cents than fees for each transaction. And you can "take" this minimum amount with you every time you change to a new address. HusQy All good my big one :P But actually not that much has changed. There has always been the concept of "economic clustering" which is basically based on similar ideas. We are just now able to implement things like this for the first time. ----------------------------------------------------------------------------------- Relax오늘 오전 1:17 Btw. Hans (sorry for interrupting this convo) but what make people say that IOTA is going the permissioned way because of your latest tweets? I don't get why some people are now forecasting that... Is it because of missing specs or do they just don't get the whole idea? Hans Moog [IF]오늘 오전 1:20 its bullshitu/Relaxanidentity based system would still be open and permissionless where everybody can choose the actors that they deem trustworthy themselves but thats anyway just sth that would be applicable with more adoption [오전 1:20] for now we use mana as a predecessor to an actual reputation system Sissors오늘 오전 1:31 If everybody has to choose actors they deem trustworthy, is it still permissionless? Probably will become a bit a semantic discussion, but still Hans Moog [IF]오늘 오전 1:34 Of course its permissionless you can follow your grandma if you want to :p Sissors오늘 오전 1:36 Well sure you can, but you will need to follow something which has a majority of the voting power in the network. Nice that you follow your grandma, but if others dont, her opinion (or well her nodes opinion) is completely irrelevant Hans Moog [IF]오늘 오전 1:37 You would ideally follow the people that are trustworthy rather than your local drug dealers yeah Sissors오늘 오전 1:38 And tbh, sure if you do it like that is easy. If you just make the users responsible for only connection to trustworthy nodes Hans Moog [IF]오늘 오전 1:38 And if your grandma follows her supermarket and some other people she deems trustworthy then thats fine as well [오전 1:38] + you dont have just 1 actor that you follow Sissors오늘 오전 1:38 No, you got a large list, since yo uwant to follow those which actually matter. So you jsut download a standard list from the internet Hans Moog [IF]오늘 오전 1:39 You can do that [오전 1:39] Is bitcoin permissionless? Should we both try to become miners? 
[1:41 AM] I mean miners that actually matter, not ones that find a block every 10 trillion years [1:42 AM] If you would want to become a validator then you would need to build up trust among other people, but anybody can still run a node and issue transactions, unlike in hashgraph where you are not able to run your own nodes (edited) [1:48 AM] Proof of Stake is also not trustless, it just has a built-in mechanism that downloads the trusted people from the blockchain itself (the richest dudes)
Sissors [today 1:52 AM]: I think most agree it would be perfect if every person had one vote. Which is problematic to implement, of course. But I really wonder if the solution is to just let users decide who to trust. At the very least I expect a quite centralized network
Hans Moog [IF] [today 1:53 AM]: of course even a trust-based system would to a certain degree be centralized, as not every person is equally trustworthy as, for example, a big corporation [1:53 AM] but I think it's gonna be less centralized than PoS or PoW [1:53 AM] but anyway it's something for "after coordicide" [1:54 AM] there are not enough trusted entities using DLT yet to make such a system work reasonably well [1:54 AM] I think the reason why blockchain has not really started to look into these kinds of concepts is that blockchain doesn't work with too many equally weighted validators [1:56 AM] I believe that DLT is only going to take over the world if it is actually "better" than existing systems, and by better I mean cheaper, more secure and faster, and PoS and PoW will have a very hard time delivering that [1:56 AM] especially if you consider that it's not only going to settle value transfers
Relax [today 1:57 AM]: I like these clear statements, it makes it really clear that DLT is still in its infancy
Hans Moog [IF] [today 1:57 AM]: currently bank transfers are orders of magnitude cheaper than BTC or ETH transactions [1:57 AM] and if we think that people will adopt it just because it's crypto then I think we are mistaken [1:57 AM] The tech needs to actually solve a problem [1:57 AM] and tbh currently people use PayPal and other companies to settle their payments [1:58 AM] having a group of the top 500 companies run such a service together is already much better (edited) [1:58 AM] especially if it's fast and feeless [2:02 AM] and the more people use it, the more decentralized it actually becomes [2:02 AM] because you have more trustworthy entities to choose from
Evaldas [IF] [today 2:08 AM]: "in the greed of miners we trust"
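To make the "follow whoever you trust" idea from this conversation concrete, here is a minimal Python sketch of how per-node trust lists could aggregate into validator weights. Everything here is hypothetical and illustrative; it is not IOTA's actual mana or reputation mechanism, which the chat itself says is still a placeholder:

```python
# Hypothetical illustration: each node publishes a list of actors it
# deems trustworthy; an actor's weight is simply how many nodes follow
# it. Not IOTA's actual mana/reputation design.
from collections import Counter

follow_lists = {
    "alice": ["supermarket", "grandma"],
    "bob":   ["supermarket", "university"],
    "carol": ["supermarket", "grandma", "university"],
}

weights = Counter(actor for follows in follow_lists.values() for actor in follows)
total = sum(weights.values())

for actor, w in weights.most_common():
    print(f"{actor}: {w / total:.0%} of declared trust")
# Grandma still counts, but actors trusted by many nodes carry more
# weight, which is exactly the point made in the conversation above:
# any single follow choice is nearly irrelevant, the aggregate matters.
```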
Maybe it's time to discuss bitcoin's history again. Credit to u/singularity87 for the original post over 3 years ago. People should get the full story of bitcoin, because r/bitcoin is probably one of the strangest of all reddit subs. r/bitcoin, the main sub for the bitcoin community, is held and run by a person who goes by the pseudonym u/theymos. Theymos not only controls r/bitcoin, but also bitcoin.org and bitcointalk.org. These are the top three communication channels for the bitcoin community, all controlled by just one person. For most of bitcoin's history this did not create a problem (at least not an obvious one anyway) until around mid 2015. This happened to be around the time a new player appeared on the scene, a for-profit company called Blockstream. Blockstream was made up of, or hired, many (but not all) of the main bitcoin developers. (To be clear, Blockstream was founded before mid 2015 but did not become publicly active until then.) A lot of people, including myself, tried to point out that there were some very serious potential conflicts of interest that could arise when one single company controls most of the main developers of the biggest decentralised and distributed cryptocurrency. There were a lot of unknowns, but people seemed to give them the benefit of the doubt because they were apparently about to release some new software called "sidechains" that could offer some benefits to the network. Not long after Blockstream came on the scene, the issue of bitcoin's scalability once again came to the forefront of the community. This issue had come up within the community a number of times since bitcoin's inception. Bitcoin, as dictated in the code, cannot handle any more than around 3 transactions per second at the moment. To put that in perspective, Paypal handles around 15 transactions per second on average and VISA handles something like 2000 transactions per second. The discussion in the community has been about how best to allow bitcoin to scale to a higher number of transactions in a given amount of time. I suggest that anyone interested in learning more about this problem from a technical angle go to r/btc and do a search. It's a complex issue, but for many who have followed bitcoin for years, the possible solutions seem relatively obvious. Essentially, the current limit is put in place in just a few lines of code. This was not originally present when bitcoin was first released. It was in fact put in place afterwards as a measure to stop a bloating attack on the network. Because all bitcoin transactions have to be stored forever on the bitcoin network, someone could theoretically simply transmit a large number of transactions which would have to be stored by the entire network forever. When bitcoin was released, transactions were actually free, as the only people running the network were enthusiasts. In fact a single bitcoin did not even have any specific value, so it would have been impossible to set a fee value. This meant that a malicious person could make the size of the bitcoin ledger grow very rapidly at little to no cost, which would stop people from wanting to join the network due to the resource requirements needed to store it, which at the time would have been for very little gain. Towards the end of the summer last year, this bitcoin scaling debate surfaced again, as it was becoming clear that the transaction limit was being reached semi-regularly and that it would not be long until it was hit regularly and the network became congested.
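For readers wondering what "just a few lines of code" means here: in Bitcoin's actual C++ source the limit was a single constant (MAX_BLOCK_SIZE = 1000000 bytes) plus a size check during block validation. A rough Python paraphrase of the shape of that rule, purely illustrative and not the actual Bitcoin Core code:

```python
# Rough paraphrase of the consensus rule discussed above. In Bitcoin's
# C++ source this was a hard-coded constant and a size check in block
# validation; this Python version only illustrates the shape of it.
MAX_BLOCK_SIZE = 1_000_000  # bytes (1 MB)

def check_block_size(serialized_block: bytes) -> bool:
    """Reject blocks larger than the hard-coded limit."""
    return len(serialized_block) <= MAX_BLOCK_SIZE

# Raising the limit really is a one-line change to the constant. The
# whole debate described in this post was about the consequences of
# that edit, never about its difficulty.
```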
This was a very serious issue for a currency. Bitcoin had made progress over the years, to the point of retailers starting to offer it as a payment option. Companies like Microsoft, Paypal, Steam and many more had begun to accept it. If the transaction limit were constantly maxed out, the network would become unreliable and slow for users. Users and businesses would not be able to make a reliable estimate of when their transaction would be confirmed by the network. Users, developers and businesses on r/bitcoin (which at the time was pretty much the only real bitcoin subreddit) started to discuss how we should solve the problem. There was significant support from users and businesses behind a simple solution put forward by the developer Gavin Andresen. Gavin was the lead developer after Satoshi Nakamoto left bitcoin and handed the project to him. Gavin initially proposed a very simple solution: change the few lines of code to increase the maximum number of transactions allowed. For most of bitcoin's history the transaction limit had been set far, far higher than the number of transactions that could realistically happen on the network. The case for increasing the limit one more time rested on the fact that history had proven no issue had been caused by this in the past. A certain group of bitcoin developers decided that increasing the limit by this amount was too much and that it was dangerous. They said that the increased resource use would create centralisation pressures which could destroy the network. The theory was that a miner with more resources could publish many more transactions than a competing small miner could handle, and therefore the network would tend towards a few large miners rather than many small miners. The group of developers who supported this theory were all developers who worked for the company Blockstream. The argument from people in support of increasing the transaction capacity by this amount was that there are always inherent centralisation pressures in bitcoin mining. For example, miners who can access the cheapest electricity will tend to succeed, and bigger miners will find that cheaper electricity more easily. Miners who have access to the most efficient computer chips will tend to succeed, and larger miners are more likely to be able to afford developing them. The argument from Gavin and others who supported increasing the transaction capacity by this method was essentially that there are economies of scale in mining, and that these economies create far bigger centralisation pressures than the increased resource cost of a larger number of transactions (up to the newly proposed limit). For example, at the time the total size of the blockchain was around 50GB. Even a 500GB SSD costs only about $150 and would last a number of years, in comparison to the $100,000s in revenue per day a miner would be making. Various developers put forth various other proposals, including Gavin Andresen, who put forth a more conservative increase that would then continue to grow over time in line with technological improvements. Some of the employees of Blockstream also put forth proposals, but all were so conservative that it would take bitcoin many decades to reach the scale of VISA.
Even though there was significant support from the community behind Gavin's simple proposal of increasing the limit, it was becoming clear that certain members of the bitcoin community who were part of Blockstream were becoming increasingly vitriolic and divisive. Gavin then teamed up with one of the other main bitcoin developers, Mike Hearn, and released a coded (i.e. working) version of the bitcoin software that would only activate if it was supported by a significant majority of the network. What happened next was where things really started to get weird. After this free and open source software was released, Theymos, the person who controls all the main communication channels for the bitcoin community, implemented a new moderation policy that disallowed any discussion of this new software. Specifically, if people discussed this software, their comments would be deleted and ultimately they would be banned temporarily or permanently. This caused chaos within the community, as there was very clear support for this software at the time and it seemed our best hope for finally solving the problem and moving on. Instead, a censorship campaign was started. At first 'all' they were doing was banning and removing discussions, but after a while it turned into actively manipulating the discussion. For example, if a thread was created with positive sentiment towards increasing the transaction capacity, or with negative sentiment about the moderation policies or the actions of certain bitcoin developers, the mods of r/bitcoin would selectively change the sorting order of the thread to 'controversial', so that the most supported opinions were sorted to the bottom of the thread and the most vitriolic to the top. This was initially very transparent, as it was possible to see that the most downvoted comments were at the top and some of the most upvoted were at the bottom. So they then implemented hiding the voting scores next to users' names. This made it impossible to work out the sentiment of the community, and when combined with selectively setting the sorting order to controversial, it made it possible to control what information users were seeing. Also, due to the very large number of removed comments and users, the scale of the censorship was becoming obvious. To hide this, they added code to the subreddit's CSS that completely hid the comments they had removed, so that the censorship itself was hidden. Anyone in support of scaling bitcoin was removed from the main communication channels. Theymos even proudly announced that he didn't care if he had to remove 90% of the users. He also later acknowledged that he knew he had the ability to block support of this software using the control he had over the communication channels. While this was all going on, Blockstream and its employees started lobbying the community by paying for conferences about scaling bitcoin, but with the very strange rule that no decisions could be made and no complete solutions could be proposed. These conferences were likely strategically (and successfully) created to stunt support for the scaling software Gavin and Mike had released, by forcing the community to take a "let's wait and see what comes from the conferences" kind of approach. Since no final solutions were allowed at these conferences, they only served to hinder and splinter the community's efforts to find a solution.
As the software Gavin and Mike released, called BitcoinXT, gained support, it started to be attacked. Users of the software were hit with DDoS attacks. Employees of Blockstream recommended attacks against the software, such as faking support for it and then dropping that support at the last moment to throw the network into disarray. Blockstream employees were also publicly talking about suing Gavin and Mike from various different angles, simply for releasing this open source software that no one was forced to run. In the end Mike Hearn decided to leave due to the way many members of the bitcoin community had treated him. This was due to the massive disinformation campaign against him on r/bitcoin. One of the many tactics used against anyone who does not support Blockstream and the bitcoin developers who work for them is a targeted smear campaign. This has happened to a number of individuals and companies who showed support for scaling bitcoin. Theymos has threatened companies that he will ban any discussion of them on the communication channels he controls (i.e. all the main ones) simply for running software that he disagrees with (i.e. any software that scales bitcoin). As time passed, more and more proposals were offered, all against a backdrop of ever increasing censorship in the main bitcoin communication channels. It finally came down to the smallest and most conservative solution. This solution was much smaller than even the employees of Blockstream had proposed months earlier. As usual there were enormous attacks from all sides, and the most vocal opponents were the employees of Blockstream. These attacks are still ongoing today. As this software started to gain support, Blockstream organised more meetings, especially with the biggest bitcoin miners, and made a pact with them. They promised to release code offering an on-chain scaling hardfork within about 4 months, but if the miners wanted this, they would have to commit to running Blockstream's software and only Blockstream's software. The miners agreed, and they ended up running only the most conservative proposal possible. This was in February last year. There is no hardfork proposal in sight from the people who agreed to this pact, and bitcoin is still stuck with the exact same transaction limit it has had since the limit was put in place about 6 years ago. Gavin has also been publicly smeared by the developers at Blockstream, and a plot was made against him to have him removed from the development team. Gavin has now been, for all intents and purposes, expelled from bitcoin development. This means that all control of bitcoin development is in the hands of the developers working at Blockstream. There is a new proposal that offers a market-based approach to scaling bitcoin; it essentially lets the market decide. Of course, as usual, there have been attacks against it, including verbal attacks from the employees of Blockstream. It has the biggest chance of gaining wide support and solving the problem for good. To give you an idea of Blockstream: it has hired most of the main and active bitcoin developers and is now synonymous with the "Core" bitcoin development team. It has, AFAIK, no products at all, and has received around $75m in funding. Every single thing it does is supported by Theymos. It has started implementing an entirely new economic system for bitcoin against the will of its users and has blocked any and all attempts to scale the network in line with the original vision.
Although this comment is ridiculously long, it really only covers the tip of the iceberg. You could write a book on the last two years of bitcoin. The things that have been going on are mind-blowing. One last thing that I think is worth talking about is u/bashco's claim of vote manipulation. The users that the video talks about have very large numbers of downvotes, mostly because they have a very high chance of being astroturfers. Around the same time last year that Blockstream became active on the scene, every single bitcoin troll disappeared, and I mean literally every single one. In the years before that, there were a large number of active anti-bitcoin trolls. They even have an active sub, r/buttcoin. Up until last year you could go down to the bottom of pretty much any thread in r/bitcoin and see many of the usual trolls heavily downvoted for saying something along the lines of "bitcoin is shit", "You guys and your tulips", etc. But suddenly, last year, they all disappeared. Instead a new type of bitcoin user appeared: someone who said they were fully in support of bitcoin but who just so happened to support every single thing Blockstream and its employees said and did. They had the exact same tone as the trolls who had disappeared. Their way of talking to people was aggressive, they'd call people names, they had a relatively poor understanding of how bitcoin fundamentally worked, and they were extremely argumentative. These users make up the majority of the list in that video. When the tens of thousands of users were censored and expelled from r/bitcoin, they ended up congregating in r/btc. The strange thing was that the users listed in that video also moved over to r/btc and spend all day, every day, posting troll-like comments and misinformation. Naturally they get heavily downvoted by the real users in r/btc. They spend their time constantly causing as much drama as possible. At every opportunity they scream about "censorship" in r/btc while being happy about the censorship in r/bitcoin. These people are astroturfers. What someone somewhere worked out is that all you have to do to take down a community is say that you are on its side. It is an astoundingly effective form of psychological attack.
Bitcoin Evolution
Bitcoin evolution was conceived after years of research into cryptography by software developer Satoshi Nakamoto (believed to be a pseudonym), who designed the algorithm and introduced it in 2009. His true identity remains a mystery. This currency is not backed by a tangible commodity (such as gold or silver); Bitcoin evolutions are traded online, which makes them a commodity in themselves. Bitcoin evolution is an open-source product, accessible to anyone who is a user. All you need is an email address, Internet access, and money to get started.
Where does it come from? Bitcoin evolution is mined on a distributed computer network of users running specialized software; the network solves certain mathematical proofs, searching for a particular data sequence ("block") that produces a particular pattern when the BTC algorithm is applied to it (a toy sketch of this search follows below). A match produces a Bitcoin evolution. It is complex and time- and energy-consuming. Only 21 million Bitcoin evolutions are ever to be mined (around 11 million are currently in circulation). The mathematical problems the network computers solve get progressively harder to keep the mining operations and supply in check. The network also validates all transactions through cryptography.
How does Bitcoin evolution work? Internet users transfer digital assets (bits) to each other on a network. There is no online bank; rather, Bitcoin evolution has been described as an Internet-wide distributed ledger. Users buy Bitcoin evolution with cash or by selling a product or service for Bitcoin evolution. Bitcoin evolution wallets store and use this digital currency. Users may sell out of this virtual ledger by trading their Bitcoin evolution to someone else who wants in. Anyone can do this, anywhere in the world. There are smartphone apps for conducting mobile Bitcoin evolution transactions, and Bitcoin evolution exchanges are populating the Internet.
How is Bitcoin evolution valued? Bitcoin evolution is not held or controlled by a financial institution; it is completely decentralized. Unlike real-world money, it cannot be devalued by governments or banks. Instead, Bitcoin evolution's value lies simply in its acceptance between users as a form of payment and in the fact that its supply is finite. Its global currency values fluctuate according to supply and demand and market speculation; as more people create wallets, hold and spend Bitcoin evolutions, and more businesses accept it, Bitcoin evolution's value will rise. Banks are now trying to value Bitcoin evolution, and some investment websites predict the price of a Bitcoin evolution will be several thousand dollars in 2014.
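The "particular pattern" in the mining description above refers to proof-of-work hashing: miners search for a nonce that makes the block's hash meet a difficulty target. Here is a toy Python sketch of that search loop, greatly simplified from real Bitcoin mining (which double-SHA-256 hashes an 80-byte block header against a dynamically adjusted target):

```python
# Toy proof-of-work: find a nonce so that the hash of (data + nonce)
# starts with a required number of zero hex digits. Real Bitcoin mining
# hashes an 80-byte block header with double SHA-256 against a dynamic
# target, but the search loop has this same shape.
import hashlib

def mine(block_data: bytes, difficulty: int = 5) -> int:
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

nonce = mine(b"example block")
print("found nonce:", nonce)
# Each extra required zero digit multiplies the expected work by 16,
# which is why mining is "time- and energy-consuming" as the article puts it.
```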
What are its advantages? There are advantages to consumers and sellers that want to use this payment option.
Fast transactions - Bitcoin evolution is transferred instantly over the Internet.
No/low fees - Unlike credit cards, Bitcoin evolution can be used for free or for very low fees. Without a centralized organization as middle man, there are no authorizations (and fees) required. This improves profit margins on sales.
Eliminates fraud risk - Only the Bitcoin evolution owner can send payment to the intended recipient, who is the only one who can receive it. The network knows the transfer has happened and transactions are validated; they cannot be challenged or taken back. This is big for online merchants, who are often subject to credit-card processors' assessments of whether a transaction is fraudulent, or for businesses that pay the high cost of credit-card chargebacks.
Data is secure - As we have seen with recent hacks on national retailers' payment-processing systems, the Internet is not always a secure place for private data. With Bitcoin evolution, users do not surrender private information.
a. Users have two keys: a public key that serves as the Bitcoin evolution address, and a private key tied to personal data. b. Transactions are "signed" digitally by combining the public and private keys; a mathematical function is applied and a certificate is generated proving the user initiated the transaction. Digital signatures are unique to each transaction and cannot be re-used (a sketch of this sign/verify flow follows at the end of this list). c. The vendor/beneficiary never sees your secret information (name, number, physical address), so it is somewhat anonymous, though traceable (to the Bitcoin evolution address on the public key).
Convenient payment system - Merchants can use Bitcoin evolution purely as a payment system; they do not have to hold any Bitcoin evolution currency, since Bitcoin evolution can be converted to dollars. Consumers or merchants can trade in and out of Bitcoin evolution and other currencies at any time.
International payments - Bitcoin evolution is used around the world; e-commerce merchants and service providers can easily accept international payments, which opens up new potential markets for them.
Easy to trace - The network tracks and permanently logs every transaction in the Bitcoin evolution block chain (the database). In the case of possible wrongdoing, it is easier for law enforcement officials to trace these transactions. https://www.bitcoinevolutionpro.com/
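The two-key signing flow described in point b above maps onto standard elliptic-curve signatures; Bitcoin uses ECDSA over the secp256k1 curve. Here is a minimal sketch using the third-party Python `ecdsa` package, purely to illustrate the sign/verify mechanics; it is not Bitcoin's actual transaction-signing code:

```python
# Minimal sign/verify flow with the third-party `ecdsa` package
# (pip install ecdsa). Illustrates the public/private key mechanics
# described above; real Bitcoin signing hashes a serialized
# transaction rather than a plain message.
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)  # kept secret
public_key = private_key.get_verifying_key()        # shareable, address side

message = b"pay 0.5 BTC to recipient"
signature = private_key.sign(message)               # unique to this message

# Anyone holding the public key can confirm the owner signed it:
assert public_key.verify(signature, message)
# The signature does not reveal the private key and cannot be reused
# for a different message, which is the "cannot be re-used" property
# claimed in the list above.
```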
How the EpiK Protocol "Saved the Miners" from Filecoin with the E2P Storage Model
On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed "How EpiK saved the miners eliminated by Filecoin by launching the E2P storage model". The following is a transcript of the sharing.
Sharing Session
Eric: Hello everyone, I'm Eric. I graduated from the School of Information Science at Tsinghua University. My Master's research was on data storage and big data computing, and I published a number of top industry conference papers. Since 2013 I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, and have been active in the crypto space as an early technology-focused investor and industry observer with 2 years of blockchain experience. I am also a blockchain community initiator and technology evangelist.
Leo: Hi, I'm Leo, the CTO of EpiK. Before I co-founded EpiK, I spent 3 to 4 years working on blockchain: public chains, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, etc., and I've made some great products. EpiK is an answer to the question we've been asking for years about how blockchain should land in the real world, and we hope that EpiK is fortunate enough to be an answer for you as well.
Q & A
Deep Chain Finance: First of all, let me ask Eric: on October 15, Filecoin's mainnet launched, which attracted everyone's attention, but at the same time the calls for forks within Filecoin never stopped, and the EpiK Protocol is one of them. What I want to know is: what kind of project is the EpiK Protocol? Why did you choose to fork in the first place? What are the differences between the forked project and Filecoin itself?
Eric: First, let me answer what kind of project the EpiK Protocol is. With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is how to make machines understand what humans know and learn new knowledge based on what they already know. Knowledge graphs at scale are a key step towards full intelligence. The EpiK Protocol was born to solve the many challenges of building large-scale knowledge graphs. The EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and a generalized economic model. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge map that will be shared and continuously updated as an eternal knowledge vault for humanity.
And then, why was the fork chosen in the first place? EpiK's project founders are all senior blockchain industry practitioners who have been closely following the industry's development and application scenarios, among which decentralized storage is a very fresh application scenario.
However, during Filecoin's development, the team found that, due to certain design mechanisms and historical reasons, Filecoin had deviated in some respects from the project's original intention: for example, the overly harsh penalty mechanism, which in practice weakens security rather than protecting it, and the computing-power competition, which led to a computing-power monopoly by large miners, who thereby monopolize packaging rights and can pad their computing power by uploading useless data themselves. These problems cause the data environment on Filecoin to get worse and worse, which leads to a lack of real value in the on-chain data, high data redundancy, and difficulty commercializing the project. Having noted the above problems, the project team proposes to introduce multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store high-value data, the knowledge graph, on the blockchain through decentralized storage, so that the lack of value in on-chain data and the monopoly of large miners' computing power can be solved to a large extent.
Finally, what differences exist between the forked project and Filecoin itself? Beyond the issues above, EpiK's design is very different from Filecoin's. First of all, EpiK is more focused in terms of business model, and it targets a different market and track from the cloud storage market where Filecoin operates, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience. EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of the data on the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making the commercialization of the entire project reasonable and feasible.
From the perspective of ecosystem construction, EpiK treats miners in a more friendly way and solves Filecoin's pain points to a large extent. Firstly, it replaces Filecoin's storage collateral and commitment collateral with a one-time collateral: miners participating in the EpiK Protocol are only required to pledge 1000 EPK per miner, and only once before mining, not for each sector. To give a sense of what 1000 EPK means: you only need to participate in pre-mining for about 50 days to earn the tokens used for pledging. The EPK pre-mining campaign is currently underway; it runs from early September to December, with a daily release of 50,000 ERC-20 standard EPK, and the pre-mining nodes whose applications are approved divide these tokens according to the mining ratio of the day. These tokens can be exchanged 1:1 directly once the mainnet launches. This move will continue to expand the number of miners eligible to participate in EPK mining. Secondly, EpiK has a more lenient penalty mechanism, unlike Filecoin's official consensus, storage and contract penalties, because data can only be uploaded by field experts, which is the "Expert to Person" (E2P) mode.
Every miner is backed up, which means that if one or more miners go offline, it will not have much impact on the network, and a miner who fails to submit the proof of spacetime in time because of being offline only forfeits the effective computing power of that sector, not the pledged coins. If the miner re-submits the proof of spacetime within 28 days, he regains the computing power. Unlike Filecoin's 32GB sectors, EpiK's encapsulated sectors are smaller, only 8M each, which largely solves Filecoin's sector-space wastage problem, and all miners have the opportunity to complete encapsulation quickly, which is very friendly to miners with little computing power. Data and quality constraints also ensure that the gap in effective computing power between large and small miners does not grow too wide. Finally, unlike Filecoin's P2P data-uploading model, EpiK changes data uploading and maintenance to E2P uploading: field experts upload and ensure the quality and value of the data on the chain, and at the same time a game relationship between data-storage roles and data-generation roles is introduced through a rational economic model, ensuring the stability of the whole system and a continuous, high-quality output of on-chain data.
Deep Chain Finance: Eric, on the eve of Filecoin's mainnet launch, issues such as Filecoin's pre-collateral aroused a lot of controversy among the miners. In your opinion, what kind of impact will Filecoin have on itself and the whole distributed storage ecosystem after it launches? Do you think the current confusing FIL prices are reasonable, and what should the normal price of FIL be?
Eric: The Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral, the storage-resource waste and computing-power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism. These problems are quite serious and will greatly affect the development of the Filecoin ecosystem. Here are two examples to illustrate. Consider the problem of the big miners' computing-power monopoly. Once the big miners have monopolized computing power, a very delicate state arises: when a miner stores a file for an ordinary user, there is no way to verify on chain whether what he stored was uploaded by someone else or by himself, because he can fake another identity and upload data for himself. That means that for any miner choosing which data to store, there is only one goal: padding his computing power as fast as possible. As far as computing power is concerned, there is no difference between storing other people's data and storing my own. When I store someone else's data, I don't know that data, and the bandwidth between me and its owner, somewhere in the world, may not be good enough. The best option is to store my own local data, which makes economic sense, and that results in no one being able to store data on the chain at all.
Miners only store their own data, because that is the most economical for them, and so the network has essentially no storage utility; no one is providing storage for the mass of retail users. The harsh penalty mechanism will also severely deplete miners' profits, because DDoS attacks are a very common and cheap technique for an attacker, and a big miner can make a very high profit in a short period by attacking other miners; it is a profitable move for every big miner. As things stand, the vast majority of miners are not very well maintained, so they are not well protected against even low-grade DDoS attacks. The penalty regime is therefore grim for them. The contradiction between an unreasonable system and real demand will inevitably push the system to evolve in a more reasonable direction, so there will be many forked projects with more reasonable mechanisms, attracting Filecoin miners and diverting storage power. Since every such project is on the decentralized storage track, the requirements on miners are similar or even compatible, so miners will tend towards the forked projects with better economic benefits and business scenarios, which filters out the projects with real value on the ground. As for the chaotic FIL price: FIL is a project that has been in the making for several years, carrying too many expectations, so it can only be said that the current situation has its own reasons for existing. As for a reasonable price for FIL, there is no way to make a prediction, because in the long run you have to consider whether the project's commercialization lands and the actual value of the on-chain data. In other words, we need to keep observing whether Filecoin becomes a game of computing power or a real value carrier.
Deep Chain Finance: Leo, we just mentioned that Filecoin's pre-collateral issue caused dissatisfaction among miners, and after Filecoin launched its mainnet, the second-round space-race test coins were directly turned into real coins, and the official selling of FIL hit the market, so many miners said they were betrayed. What I want to know is: EpiK's main motto is "save the miners eliminated by Filecoin". How does EpiK deal with Filecoin's various problems, and how will EpiK achieve this "saving"?
Leo: Filecoin's tacit approval of computing-power padding amounted to the officials choosing to abandon the small miners, and turning test coins into real coins hurt the interests of the loyal big miners in one cut. We do not know why such basic mistakes were made; we can only regret them. EpiK did not set out to fork Filecoin; rather, to build a shared knowledge-graph ecosystem, EpiK had to integrate decentralized storage, so Filecoin's most hardcore technology, the PoRep and PoSt decentralized verification, was chosen. To ensure the quality of knowledge-graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from padding computing power, and there is no reason for valueless data to take up such an expensive decentralized storage resource. With computing-power padding impossible, the difference between big miners and small miners is minimal while the amount of knowledge-graph data is still small.
We can't say that we can save the big miners, but we are definitely the optimal choice for the small miners currently being eliminated by Filecoin.
Deep Chain Finance: Let me ask Eric: according to the EpiK protocol, EpiK adopts the E2P model, in which only voted-in field experts may upload data. This is very different from Filecoin's P2P model, in which individuals upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean the EpiK protocol is not open to everyone?
Eric: First, let me explain the advantages of the E2P model over the P2P model. There are five roles in the DAO ecosystem: miner, coin holder, field expert, bounty hunter and gateway. These five roles share the EPK generated every day once the mainnet launches: miners receive 75% of the EPK, field experts 9%, and voting users share 1%. The other 15% fluctuates based on the network's daily traffic, and that 15% is partly a game between the miners and the field experts (a small sketch of these numbers follows at the end of this interview). Let me first describe the relationship between those two roles. The first group of field experts is selected by the Foundation; they cover different areas of knowledge (knowledge in a broad sense, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs to receive 100,000 EPK in votes to become a field expert. The field expert's role is to submit high-quality data to the miners, who are responsible for encapsulating this data into blocks. Network activity is judged by the amount of EPK pledged by the entire network for daily traffic (1 EPK = 10 MB/day); a higher percentage indicates higher data demand, which requires the miners to increase bandwidth quality. If data demand decreases, the field experts must provide higher-quality data. It is similar to a library: more visitors need more seats, i.e., the miners are paid to upgrade the bandwidth; when there are fewer visitors, more money goes into buying better books to attract visitors, i.e., money for bounty hunters and field experts to generate more high-quality knowledge-graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the officials and the big miners in the Filecoin ecosystem. The game relationship between data producers and data storers, together with a more rational economic model, will inevitably make the E2P model generate stored on-chain data of much higher quality than the P2P model, with better bandwidth for data access, resulting in greater business value and better landing scenarios.
I will then answer whether this means the EpiK protocol is not open to everyone. The E2P model only constrains the quality of the data generated and stored, not the roles in the ecosystem. On the contrary, with the introduction of the DAO model, the variety of roles in the EpiK ecosystem is not limited, and it includes roles for ordinary people (such as bounty hunters who are competent at their tasks), giving everyone a role and a logical way to participate in the system.
For example, a miner with computing power can provide storage, a person with knowledge of a certain domain can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to label and correct data can become a bounty hunter. Various efficient supporting tools from the project will lower the barriers to entry for the various roles, allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance: Leo, some time ago EpiK released a white paper and an economic white paper, explaining the EpiK concept from the perspectives of technology and of the economic model respectively. What I would like to ask is: what are the shortcomings of current distributed storage projects, and how will the EpiK protocol improve on them?
Leo: Distributed storage can easily be mistaken for something like Alibaba's OceanBase, but in the blockchain field we should focus on decentralized storage first. There is a big problem with the decentralized storage on the market now, a "why don't they eat meat porridge" problem (the Chinese equivalent of "let them eat cake"). How to understand this? By its technical principles, decentralized storage cannot be cheaper than centralized storage; if it appears to be, the centralized storage being compared is simply rubbish. So what incentive does the average user have to spend more money to store data on decentralized storage? Is it safer? Miners on decentralized storage can shut down at any time; that is by no means more secure than keeping a copy each with Alibaba and Amazon. More private? There is no difference between storing encrypted data on decentralized storage and storing encrypted data on Amazon. Faster? The bandwidth of decentralized storage nodes simply doesn't compare to the fiber in a centralized server room. This is the root problem of the business model: no one is using it, no one is buying it, so what is the big vision? The goal of EpiK is to guide all community participants in jointly building and sharing domain knowledge-graph data, which is the best way for robots to understand human knowledge; the more knowledge-graph data there is, the more knowledge a robot has, and the more intelligent it becomes, exponentially so. In other words, EpiK uses decentralized storage technology to capture exponentially growing data value at linearly growing hardware cost, and that is where the buy-in for EPK comes from. Organized data is worth far more than organized hard drives, and there will be demand for EPK when robots need intelligence.
Deep Chain Finance: Let me ask Leo: roughly how many forked projects does Filecoin have so far? Do you think the waves of forks will increase or decrease after the mainnet launch? Have the requirements of miners at large changed when it comes to participation?
Leo: We don't have specific statistics. Now that the mainnet has launched, we expect forked projects to increase; there are so many shut-out miners in the market that they need to be organized efficiently. However, we currently see that most forked projects simply modify the parameters of Filecoin's economic model, which is undesirable; that level of modification can't change the status quo of miners padding computing power, and the only change to the market is to make some of the big miners feel more comfortable mining, which won't help the decentralized storage ecosystem land.
We need more reasonable landing scenarios, so that idle mining resources can be turned into effective productivity, delivering a 100x coin instead of whipping up one wave of FOMO sentiment after another.
Deep Chain Finance: How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
Eric: The development of the EpiK Protocol is divided into 5 major phases: Phase I, the test network "Obelisk"; Phase II, Mainnet 1.0 "Rosetta"; Phase III, Mainnet 2.0 "Hammurabi"; Phase IV, enriching the knowledge-graph toolkit; Phase V, enriching the knowledge-graph application ecosystem. We are currently in the first phase, the test network "Obelisk": anyone can sign up to participate in the testnet pre-mining and obtain ERC-20 EPK tokens, exchangeable one-to-one after the mainnet launch. We have recently listed ERC-20 EPK on Uniswap; you can buy and sell it freely there or download our EpiK mobile wallet. In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward centralized-exchange listings for the token.
Users' Questions
User 1: Some KOLs said Filecoin has already consumed the next few years of its value, so it will plunge. What do you think?
Eric: First of all, a judgment about the market should correspond to the cycle. Being bearish on FIL is not the same as being bearish on the project's economic model, or on the distributed storage track. We are very confident in the distributed storage track; it will certainly go through cycles of growth and decline, which is how better projects get selected. Since the existing group of miners and the computing power already produced are fixed, and since EpiK miners and FIL miners are compatible, miners will at any time choose the more promising and more economically viable projects. As for "Filecoin has consumed the next few years of its value, so it will plunge": a plunge is not something one can predict; in this industry you have to keep learning, iterating and judging value. Market sentiment is one factor, but there are other, more important ones, for example the big washout in March this year. So it can only be said that this will slow down the development of the FIL community. Prices are indeed unpredictable.
User 2: Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are the landing applications of EpiK?
Leo: The best and most direct application of EpiK's knowledge graph is the question-and-answer system, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game-strategy guide, and so on.
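To pull the numbers from this interview together (as referenced in the E2P answer above): the stated daily split is 75% to miners, 9% to field experts, 1% to voting users, and 15% floating with network traffic, and the pre-mining phase releases 50,000 EPK per day across all approved nodes. Here is a back-of-the-envelope Python sketch; the equal per-node split and the node count are assumptions made purely for illustration, since the interview only says rewards follow "the mining ratio of the day":

```python
# Back-of-the-envelope numbers from the interview above. The equal
# per-node split and the node count are illustrative assumptions only.
DAILY_SPLIT = {"miners": 0.75, "field_experts": 0.09,
               "voting_users": 0.01, "traffic_pool": 0.15}
assert abs(sum(DAILY_SPLIT.values()) - 1.0) < 1e-9  # shares add up to 100%

PREMINE_PER_DAY = 50_000   # ERC-20 EPK released daily during pre-mining
PLEDGE = 1_000             # one-time collateral per miner
assumed_nodes = 2_500      # hypothetical; not stated in the interview

per_node_per_day = PREMINE_PER_DAY / assumed_nodes
days_to_pledge = PLEDGE / per_node_per_day
print(f"~{per_node_per_day:.0f} EPK/node/day -> ~{days_to_pledge:.0f} days to earn the pledge")
# With ~2,500 nodes sharing equally, the "about 50 days" figure quoted
# in the interview works out; fewer participating nodes would shorten it.
```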
Don't blindly follow a narrative, it's bad for you and it's bad for crypto in general
I mostly lurk around here, but I see a pattern repeating over and over again, here and in multiple communities, so I have to post. I'm posting this here because I appreciate that this sub is a place of free speech and maybe something productive can come out of this post, while r/bitcoin is just fucking censorship, memes and moon/lambo posts. If you don't agree, write in the comments why, instead of downvoting. You don't have to upvote either, but when you downvote you are killing the opportunity to have a discussion. If you downvote, or comment that I'm wrong without providing any counterpoints, you are no better than the BTC maxis you despise.

In various communities I see a narrative being used to bring people in and make them follow something without thinking for themselves. In crypto I see this mostly in the BTC vs BCH tribalistic arguments:

- BTC community: "Everything that is not BTC is a shitcoin," or more recently, as stated by Adam on twitter, "Everything that is not BTC is a ponzi scheme, even ETH," "what is ETH's supply?", and even that they are doing this for "altruistic" reasons, to "protect" the newcomers. Very convenient for them that they are protecting the newcomers by having them buy their bags.

- BCH community: "BTC maxis are dumb," "just increase the block size and you will have truly p2p electronic cash," "It is just that simple, there are no trade-offs," "if you don't agree with me you are a BTC maxi," "BCH is Satoshi's vision for p2p electronic cash."

It is not exclusive to crypto but also happens in politics, and you see it over and over again on twitter and on reddit. My point is that narratives are created so people don't have to think; they just pick a narrative that is easy to follow and makes sense to them, and stick with it. And people keep repeating these narratives to bring other people in, maybe out of ignorance, because they truly believe them without questioning, or maybe out of self-interest, because they want to shill you their bags.

Because this is the BCH community, and because r/bitcoin is censored so I can't post there about the problems in the BTC narrative (some of which are, IMO, correctly identified by the BCH community), I will stick to the narrative I see in the BCH community. The trigger for this post was this post by user u/scotty321: "The BTC Paradox: 'A 1 MB blocksize enables poor people to run their own node!' 'Okay, then what?' 'Poor people won't be able to use the network!'" You will see many posts of this kind being made by u/Egon_1 as well. Then you have this comment in that thread by u/fuck_____________1 saying that people who want to run their own nodes are retarded and that there is no reason to want to do that: "Just trust block explorer websites." And the post and comment were highly upvoted. Really? You really think there is no problem in having just a few nodes on the network? And that the only thing securing the network is miners? As stated by user u/co1nsurf3r in that thread:
While I don't think that everybody needs to run a node, a full node does publish blocks it considers valid to other nodes. This does not amount to much if you only consider a single node in the network, but many "honest" full nodes in the network will reduce the probability of a valid block being withheld from the network by a collusion of "hostile" node operators.
But surely this will not get attention here, and will be downvoted by the people who promote the narrative that there is no trade-off in increasing the block size and that the people who don't see it are retarded or are BTC maxis.

The only narrative I stick to, and have for many years now, is that cryptocurrency takes power from the government and gives power to the individual, so you are not restricted to your local economy and can participate in the global economy. There is also the narrative of banking the unbanked, which I hope will come true, but it is not a use case we are seeing right now. Some people would argue that removing power from governments is a bad thing, but you can't deny the fact that governments can't control crypto (at least we would want them not to).

But if you really want individuals to remain in control of their money and to transact with anyone in the world, the network needs to be very resistant to attacks of any kind. How can you have p2p electronic cash if your network has just a handful of nodes and the Chinese government can locate them and simply block communication to them? I'm not saying this is BCH's situation; I'm just refuting the claim that there is no value in running your own node. If you are relying on block explorers, the government can just block communication to the block explorer websites. Then what? Whom will you trust for chain information? The nodes need to be decentralized, so that if you take one node down, many more can appear, making the network hard to censor and leaving no small set of points of failure.

Right now BTC is focusing on the use case of being difficult to censor. But with that comes the problem that it is very expensive to transact on the network, which defeats the purpose of anyone being able to participate. Obviously I think that is also a major problem, and the lightning network is awful right now and probably still years away from being usable, if it ever will be. The best solution is up for debate, but thinking that you just have to increase the block size and there is no trade-off is naive or misleading. BCH is doing a good thing in trying to come up with a solution that is inclusive and promotes cheap and fast transactions, but centralization is a major concern and nothing to just shrug off. Saying that "a 1 MB blocksize enables poor people to run their own node" and that because of that "poor people won't be able to use the network" is a misrepresentation designed to promote a narrative: the 1 MB limit is not there so that "poor" people can run their node, it is there to let as many people as possible run a node, to promote decentralization and avoid censorship.

Also, an elephant in the room that you will not see discussed in either the BTC or BCH communities is that mining pools are heavily centralized. And I'm not just talking about miners being mostly in China, but about big pools controlling a lot of hashing power in both BTC and BCH, which is terrible for the purpose of crypto. Other projects are trying to solve that. Will they be successful? I don't know; I hope so, because I don't buy into any narrative. There are many challenges, and I want to see crypto succeed as a whole. As always guys, DYOR and always question whether you are blindly following a narrative. I'm sure I will be called a BTC maxi, but maybe some people will find value in this. Don't trust guys who are always posting silly "gotchas" against the other "tribe".
EDIT: User u/ShadowOfHarbringer has pointed me to some threads where this has been discussed in the past, and I will put my take on them here for visibility, as I will be using this thread as a reference in future discussions I engage in:
When there were only 2 nodes in the network, adding a third node increased the redundancy and resiliency of the network as a whole in a significant way. When there are thousands of nodes in the network, adding yet another node only marginally increases the redundancy and resiliency of the network. So the question then becomes a matter of personal judgement of how much that added redundancy and resiliency is worth. For the absolutist, it is absolutely worth it and everyone on this planet should do their part.
What is the magical number of nodes beyond which adding new ones becomes counterproductive? Did he do any math? Does BCH achieve this holy-grail safe number of nodes? Guess what: nobody knows at what number of nodes it starts to become marginally irrelevant to add new ones. Even BTC today might still not have enough nodes to be safe. If you can't know for sure that you are safe, it is better to be safe than sorry. Thousands of nodes is still not enough; as I said, it is much cheaper to run a full node than it is to mine. If it costs millions in hash power to pull off a 51% attack on block generation, that means nothing if it costs less than $10k to run more nodes than currently exist in the entire network and cause havoc, slowing people's use of the network, or to use bot farms to DDoS the thousands of nodes in the network. Not all attacks are monetarily motivated. Governments with billions of dollars at their disposal, facing something that could threaten their power, would do anything they could to stop people from using it, and the cheaper that is to do, the better for them.
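One way to make the "marginal redundancy" disagreement above concrete: if each peer a wallet connects to is hostile with some probability, the chance that all of its peers collude falls off exponentially with the peer count but rises as attackers add cheap hostile nodes. Here is a toy Python sketch, with made-up numbers, just to show the shape of the curve both sides are arguing about:

```python
# Toy model of the redundancy argument above: probability that ALL of a
# user's randomly chosen peers are hostile, as a function of how many
# honest nodes exist. All numbers are made up for illustration.
def p_all_peers_hostile(hostile: int, honest: int, k: int = 8) -> float:
    """Chance that k independently drawn peers are all hostile."""
    p = hostile / (hostile + honest)
    return p ** k

hostile = 1_000  # attacker-run nodes (cheap to spin up, as argued above)
for honest in (100, 1_000, 10_000, 100_000):
    print(f"{honest:>7} honest nodes -> {p_all_peers_hostile(hostile, honest):.2e}")
# Each extra honest node helps less than the last one (the diminishing-
# returns point in the quoted comment), but against an attacker who can
# cheaply add hostile nodes, the honest count still matters (the OP's point).
```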
You should run a full node if you're a big business with e.g. >$100k/month in volume, or if you run a service that requires high fraud resistance and validation certainty for payments sent your way (e.g. an exchange). For most other users of Bitcoin, there's no good reason to run a full node unless you feel like it.
Shouldn't individuals benefit from fraud resistance too? Why just businesses?
Personally, I think it's a good idea to make sure that people can easily run a full node because they feel like it, and that it's desirable to keep full node resource requirements reasonable for an enthusiast/hobbyist whenever possible. This might seem to be at odds with the concept of making a worldwide digital cash system in which all transactions are validated by everybody, but after having done the math and some of the code myself, I believe that we should be able to have our cake and eat it too.
This is a recurring argument, but again no math is provided; it amounts to "just trust me, I did the math".
The biggest reason individuals may want to run their own node is to increase their privacy. SPV wallets rely on others (nodes or ElectronX servers) who may learn their addresses.
It is a valid reason, but not the biggest one.
If you do it for fun or to experiment, good. If you do it for extra privacy, that's OK. If you do it to help the network, don't. You are just slowing down miners and exchanges.
Yes, it will slow down the network, but that just shows how people don't get the trade-off they are making.
I will just copy/paste what Satoshi Nakamoto said in his own words. "The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server."
Another "it is all or nothing argument" and quoting satoshi to try and prove their point. Just because every user doesn't need to be also a full node doesn't mean that there aren't serious risks for having few nodes
For this to have any importance in practice, all of the miners, all of the exchanges, all of the explorers and all of the economic nodes would have to go rogue all at once and collude to change consensus. If you have a node you can detect this. It doesn't do much, because such a scenario is impossible in practice.
Not true, because as I said, you can DDoS the current nodes or run more malicious nodes than there currently are, because it is cheap to do so.
Non-mining nodes don't contribute to adding data to the blockchain ledger, but they do play a part in propagating transactions that aren't yet in blocks (the mempool). Bitcoin client implementations can apply different validation rules to transactions they see outside of blocks versus transactions they see inside of blocks. This allows for "soft forks" that add new types of transactions without completely breaking older clients: while a transaction is in the mempool, a node receiving a transaction of a new/unknown type can drop it as invalid and not propagate it to its peers; but if that same transaction ends up in a block and the node receives the block, it accepts the block (and the transaction in it) as valid, and therefore doesn't get left behind on the blockchain and become a fork. This participation in the mempool is a sort of "herd immunity" protection for the network, and it was a key talking point for the "User Activated Soft Fork" (UASF) around the time the Segregated Witness feature was being added. If a certain threshold of nodes update their software to not propagate certain types of transactions (or to not communicate with certain types of nodes), they can control what gets into a block: someone wanting such a transaction mined would need to communicate directly with a mining node, or only through nodes that weren't blocking that sort of transaction. It's less direct than the influence mining nodes have on the blockchain data, but it's definitely not nothing.
The first reasonable comment in that thread, but it's buried deep down with only 1 upvote.
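A minimal sketch of the relay-versus-consensus asymmetry described above, in Python. The type names ("p2pkh", "segwit") and both functions are illustrative stand-ins, not real Bitcoin client code; they only model the policy that unknown transaction types are dropped from the mempool but tolerated inside blocks.

```python
# Sketch: an older client's two validation paths (hypothetical, simplified).
KNOWN_TX_TYPES = {"p2pkh", "p2sh"}   # types this (older) client understands

def should_relay(tx_type: str) -> bool:
    """Mempool policy: only propagate transaction types we can fully validate."""
    return tx_type in KNOWN_TX_TYPES

def should_accept_block(block_tx_types: list[str]) -> bool:
    """Consensus rule: accept a block even if it carries unknown tx types,
    so the node doesn't fork itself off the chain."""
    return True  # unknown types are tolerated once they are mined into a block

# An unknown "segwit"-style type is dropped from relay but accepted in a block:
print(should_relay("segwit"))                    # False -> not propagated
print(should_accept_block(["p2pkh", "segwit"]))  # True  -> no chain split
```

This split is exactly what gives relaying nodes their "herd immunity" leverage: they can starve a transaction type of propagation without ever rejecting a valid block.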
The addition of non-mining nodes does not add to the efficiency of the network, but actually takes away from it because of the latency issue.
That is true, and it is precisely the trade-off being made: if you keep the node count low for the sake of efficiency, you are sacrificing security to gain scalability.
The addition of non-mining nodes has little to no effect on security, since you only need to destroy mining ones to take down the network
It is true that if you destroy the mining nodes you stop the network from producing new blocks (temporarily), even if you have a lot of non-mining nodes. But that is still better than taking down mining nodes that are also the only full nodes. If the miners are not the only full nodes, you still have full nodes with the blockchain data, so new miners can download it and join. If all the miners are also the only full nodes and you take them down, where will you get all the past blockchain data to start mining again? Just pray that the miners that were taken down come back online at some point in the future?
The real limiting factor is ISP's: Imagine a situation where one service provider defrauds 4000 different nodes. Did the excessive amount of nodes help at all, when they have all been defrauded by the same service provider? If there are only 30 ISP's in the world, how many nodes do we REALLY need?
You can't be defrauded if the connection is encrypted. Use Tor, for example; it is hard for ISPs to know what you are doing.
Satoshi specifically said in the white paper that after a certain point the number of nodes needed plateaus, meaning that past that point adding more nodes is actually counterproductive, which we also demonstrated (the latency issue). So we have adequately demonstrated why running non-mining nodes does not add additional value or security to the network.
Again, what is the number of nodes that makes it counterproductive? Did he do any math?
There's also the matter of economically significant nodes and the role they play in consensus. Sure, nobody cares about your average joe's "full node" where he is "keeping his own ledger to keep the miners honest", as it has no significance to the economy and the miners couldn't give a damn about it. However, if say some major exchanges got together to protest a miner activated fork, they would have some protest power against that fork because many people use their service. Of course, there still needs to be miners running on said "protest fork" to keep the chain running, but miners do follow the money and if they got caught mining a fork that none of the major exchanges were trading, they could be coaxed over to said "protest fork".
In consensus, what matters about nodes is only their number; the economic power of a node means nothing. The protocol doesn't see the net worth of the individual or organization running a node.
Running a full node that is not mining and not involved in spending or receiving payments is of very little use. It helps to make sure network traffic is broadcast, and it is another copy of the blockchain, but that is all (and is probably not needed in a healthy coin with many other nodes).
He gets it right (broadcasting transactions and keeping a copy of the blockchain), but he dismisses the importance of it.
Hey all, I've been researching coins since 2017 and have gone through hundreds of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and from that moment I have had a keen interest in smart contract platforms. I'm passionate about Ethereum, but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has found an elegant balance between being secure, decentralized, and scalable, in my opinion.
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
Fun fact: the name Zilliqa is a play on 'silica' (silicon dioxide), as in "silicon for the high-throughput consensus computer."
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
Technology and some more:
The technology is one of the main reasons why I'm so bullish on Zilliqa. The first thing you see on their website is: "Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications." These are some bold statements.
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it, and have also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2,400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion, and currently only mining rewards are left. The maximum supply is 21 billion; annual inflation is currently 7.13% and will only decrease with time.
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing but that decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain, where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms; Zilliqa uses network, transaction, and computational sharding. Network sharding opens up the possibility of using transaction and computational sharding on top. Zilliqa does not use state sharding for now. We'll come back to this later.
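The "almost linear" claim is easy to see with a toy model in Python. The per-shard throughput figure here is a hypothetical placeholder, not a Zilliqa benchmark; the shard size of 600 nodes matches the figure quoted later in this post.

```python
# Toy model: throughput grows with the number of shards the nodes can form.
NODES_PER_SHARD = 600   # shard size quoted later in the post
TPS_PER_SHARD = 250     # hypothetical per-shard throughput

def network_tps(total_nodes: int) -> int:
    shards = total_nodes // NODES_PER_SHARD   # each full group of nodes = 1 shard
    return shards * TPS_PER_SHARD             # shards process in parallel

for nodes in (1200, 2400, 4800):
    print(nodes, "nodes ->", network_tps(nodes), "tx/s")
# 1200 -> 500, 2400 -> 1000, 4800 -> 2000: doubling nodes doubles throughput
```

The linearity holds only for transactions a shard can process alone; cross-shard transactions (covered near the end of the technology section) are what complicate this picture.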
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it's good to keep in mind that making a blockchain decentralised, secure, and scalable at the same time is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. After all, these premises need to be true, otherwise there isn't a fundamental case to be bullish on Zilliqa, right?
Down the rabbit hole
How have they achieved this? Let's define the basics first: the key players on Zilliqa are users and miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as 'network sharding'. Miners are subsequently assigned randomly to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions, and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) later on.
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS block consists of 100 Tx blocks. And as previously mentioned, there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Becoming a shard node or DS node is determined by the result of a PoW cycle (Ethash) at the beginning of the DS block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty are allowed onto the network. To put it in perspective: the average difficulty for one DS node is ~2 TH/s, equal to 2,000,000 MH/s, or 55,000+ GeForce GTX 1070 / 8 GB GPUs at 35.4 MH/s each. Each DS block, 10 new DS nodes are allowed in. A shard node currently needs to provide around 8.53 GH/s (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of (Ethash) hashing power available you could mine solo.
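The GPU equivalences above are simple division; a quick Python check reproduces the post's figures from the quoted hash rates:

```python
# Reproducing the GPU-count equivalences quoted above.
GTX_1070_MHS = 35.4          # MH/s per GeForce GTX 1070, as quoted

ds_node_mhs = 2_000_000      # ~2 TH/s expressed in MH/s
shard_node_mhs = 8_530       # ~8.53 GH/s expressed in MH/s

print(f"DS node    ~= {ds_node_mhs / GTX_1070_MHS:,.0f} GTX 1070s")    # ~56,497
print(f"Shard node ~= {shard_node_mhs / GTX_1070_MHS:,.0f} GTX 1070s") # ~241
```

That is where the "55 thousand+" and "around 240" figures come from.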
The PoW cycle of 60 seconds is a peak performance and acts as an entry ticket to the network. This entry ticket is a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. After every 100 Tx blocks, which corresponds to roughly 1.5 hours, the PoW process repeats. In between, no PoW needs to be done, meaning Zilliqa's energy consumption to keep the network secure is low. For more detailed information on how mining works click here.

Okay, hats off to you, you have made it this far. Before we go any deeper down the rabbit hole we must first understand why Zilliqa goes through all of the above technicalities, and understand a bit more what a blockchain is on a more fundamental level. Because the core of Zilliqa's consensus protocol relies on pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and then come back to this article. We will use this site to walk through a few concepts.
We have established that Zilliqa is a public and distributed blockchain, meaning that everyone with an internet connection can send ZIL, trigger smart contracts, etc., and there is no central authority that fully controls the network. Zilliqa and other public, distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
Paraphrasing examples and definitions from Samuel Brooks' Medium article, a blockchain (like Zilliqa) can be defined as: "A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data".
Next, he states that "blockchains are fundamentally systems for managing valid state transitions". For more context, I recommend reading the whole Medium article to get a better grasp of the definitions and of state machines. Nevertheless, let's try to compress it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn't matter whether you encounter the traffic light today or tomorrow; it will behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light's state going from green to red (via amber) and another light's from red to green.
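The traffic light really is a textbook finite state machine, and it fits in a few lines of Python (this sketch is mine, just to make the definition concrete):

```python
# A traffic light as a finite state machine: predefined states,
# predefined transitions, fully deterministic.
TRANSITIONS = {"green": "amber", "amber": "red", "red": "green"}

class TrafficLight:
    def __init__(self, state: str = "red"):
        self.state = state

    def trigger(self) -> str:
        """A sensor or button press moves the light to its next state."""
        self.state = TRANSITIONS[self.state]
        return self.state

light = TrafficLight("green")
print(light.trigger())  # amber
print(light.trigger())  # red -- same input, same output, today or tomorrow
```

Every state and every transition is known in advance, which is exactly what the next paragraph contrasts with a public blockchain.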
With public blockchains like Zilliqa this isn't so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous one is added, resulting in a chain of blocks with transactions in them that everyone can verify from block #1 to the current #647,000+ block. The state is ever-changing and the states it can find itself in are infinite. And while a traffic light might work in tandem with various other traffic lights, that is insignificant compared to a public blockchain, because Zilliqa consists of 2,400 nodes that need to work together to reach consensus on the latest valid state while some of those nodes may have latency or broadcast issues, drop offline, or deliberately try to attack the network.
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block height, and DS height, and then hit refresh. As expected, you see new, incremented values for one or all of these parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, and as such no GPU is involved (only CPU). As a result, the total energy consumed to keep the blockchain secure, decentralized, and scalable is low.
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017. Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
pBFT can tolerate up to ⅓ of the nodes being dishonest (offline counts as Byzantine, i.e. dishonest) while the consensus protocol keeps functioning without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network stalls and a view change is triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (over 66%) do double-spend attacks become possible.
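Those three thresholds are easy to express as a function. This is a sketch of the fault bands described above for a committee of n nodes (the function name and the exact boundary handling are my own simplification, not Zilliqa code):

```python
# The pBFT fault bands described above, for a committee of n nodes.
def pbft_status(n: int, dishonest: int) -> str:
    if dishonest * 3 <= n:       # up to 1/3 dishonest
        return "consensus proceeds normally"
    if dishonest * 3 <= 2 * n:   # between 1/3 and 2/3
        return "stalled -> view change elects a new leader"
    return "more than 2/3 dishonest -> double-spends become possible"

for f in (200, 400, 500):        # a 600-node shard, like Zilliqa's
    print(f, "dishonest of 600:", pbft_status(600, f))
```

For a 600-node shard this puts the safe bound at 200 dishonest nodes, stalling between 201 and 400, and loss of safety beyond 400.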
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
Another benefit of using pBFT for consensus, besides low energy use, is the immediate finality it provides: once your transaction is included in a block and the block is added to the chain, it's done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute, and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although lengthy already, we have skipped some of the inner workings of Zilliqa's consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the sybil resistance mechanism, pBFT, etc. Another thing we haven't looked at yet is the degree of decentralization.
Currently there are four shards, each consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service; they need to achieve a higher difficulty than shard nodes) and 1,800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1,200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes performing pBFT; there is no data on where the PoW sources come from. When the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2,400 nodes, allowing more nodes and the formation of more shards, which will let the network keep scaling according to demand.

Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward them to the lookup nodes (another type of node) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain, which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
The seed nodes were at first operated only by Zilliqa themselves, exchanges, and Viewblock. Operators of seed nodes like exchanges had no incentive to open them to the greater public, so they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
5% of the block rewards are assigned to seed nodes (since the beginning in 2019), and those are used to pay out ZIL stakers. The 5% block rewards with an annual yield of 10.03% translate to roughly 610 MM ZIL in total that can be staked. Exchanges use the custodial variant of staking, and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is done by sending ZIL to a smart contract created by Zilliqa and audited by Quantstamp.
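A rough sanity check of how those figures relate, assuming (my assumption, not an official statement) that the 5% seed-node share is the budget that pays the staking yield:

```python
# Back-calculation from the figures quoted above (assumption: the 5% seed-node
# share of block rewards is the budget that funds the staking yield).
SEED_NODE_SHARE = 0.05     # 5% of block rewards go to seed nodes / stakers
APY = 0.1003               # quoted annual staking yield
STAKEABLE_ZIL = 610e6      # quoted total ZIL that can be staked

annual_staking_budget = STAKEABLE_ZIL * APY
implied_total_rewards = annual_staking_budget / SEED_NODE_SHARE

print(f"Annual ZIL paid to stakers:        {annual_staking_budget / 1e6:.1f} MM")
print(f"Implied total annual block rewards: {implied_total_rewards / 1e9:.2f} B ZIL")
```

Under that assumption, ~61 MM ZIL per year funds the stakers, which is what caps the stakeable amount at roughly 610 MM at a 10.03% yield.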
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized, in my opinion.
Generalized: programming languages can be divided into 'object-oriented' and 'functional'. Here is an ELI5 from a software development academy: "all programs have two basic components: data (what the program knows) and behavior (what the program can do with that data). Object-oriented programming states that combining data and related behaviors in one place, called an 'object', makes it easier to understand how a particular program works. Functional programming, on the other hand, argues that data and behavior are different things and should be separated to ensure clarity."
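To make that distinction tangible, here is the same tiny counter written both ways (a Python illustration of the general idea, not Scilla or OCaml syntax):

```python
# Object-oriented: data (count) and behavior (increment) live together.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1   # the object mutates its own state

# Functional: data and behavior are separate; values are never mutated.
def increment(count: int) -> int:
    return count + 1      # a new value is returned; the input is untouched

c = Counter()
c.increment()
print(c.count)        # 1 (state changed inside the object)
print(increment(0))   # 1 (no state anywhere; just input -> output)
```

The functional style's lack of hidden mutable state is a big part of why it lends itself to the formal reasoning discussed below.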
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
Scilla is blockchain agnostic and can be implemented on other blockchains as well; it is recognized by academics and won a Distinguished Artifact Award at the end of last year.
One of the reasons the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic to a blockchain means you cannot afford to make mistakes; otherwise it could cost you. Blockchains being immutable is all great and fun, but updating your code because you found a bug isn't the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
Another difference with programming on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you essentially pay for computational costs. Sending a ZIL from address A to address B currently costs 0.001 ZIL. Smart contracts are more complex, often involve various functions, and require more gas (if gas is a new concept, click here).
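The fee arithmetic itself is simple: fee = gas consumed x price per gas unit. The decomposition of the 0.001 ZIL transfer fee below is hypothetical (the post quotes only the total), but it shows why a heavier contract call costs proportionally more:

```python
# Fee = units of computation consumed x price per unit (illustrative values).
def tx_fee(gas_used: int, gas_price: float) -> float:
    return gas_used * gas_price

# A simple transfer: the post quotes ~0.001 ZIL total. One hypothetical split:
print(tx_fee(50, 0.00002))   # 0.001 ZIL
# A contract call consuming 10x the computation costs 10x the fee:
print(tx_fee(500, 0.00002))  # 0.01 ZIL
```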
So with Scilla, similar to Solidity, you need to make sure that "every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems" (Scilla Design Story Part 1).
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
“Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they've written does what they intend it to do.
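As a loose analogy (not Coq, and not a formal proof): formal verification establishes a property for all possible inputs, whereas testing only samples some. The Python sketch below exhaustively checks a conservation property on a tiny domain; a proof assistant like Coq establishes the same kind of property symbolically, for the infinite case.

```python
# Analogy only: exhaustively checking a property that formal verification
# would *prove* for all inputs. The transfer function is hypothetical.
def transfer(a: int, b: int, amt: int) -> tuple[int, int]:
    assert 0 <= amt <= a, "insufficient funds"
    return a - amt, b + amt

# Property: a transfer never creates or destroys funds.
for a in range(10):
    for b in range(10):
        for amt in range(a + 1):
            new_a, new_b = transfer(a, b, amt)
            assert new_a + new_b == a + b, "conservation violated!"

print("conservation holds on the checked domain")
```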
Smart contract on a sharded environment and state sharding
There is one more topic I'd like to touch on: smart contract execution in a sharded environment (and the effect of state sharding). This is a complex topic. I'm not able to explain it any more simply than what is posted here, but I will try to compress the post into something easy to digest.
Earlier we established that Zilliqa can process transactions in parallel due to network sharding; this is where the linear scalability comes from. We can define three transaction categories: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones, where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable; with Category 2 transactions it sometimes is, if the address is in the same shard as the smart contract; but with Category 3 you definitely need communication between the shards. Solving that requires defining a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
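A toy model in Python captures the distinction. The modulo shard-assignment rule is my own hypothetical stand-in for whatever assignment the protocol actually uses; the point is only when cross-shard communication becomes unavoidable:

```python
# Toy model of the three transaction categories and cross-shard needs.
NUM_SHARDS = 4

def shard_of(address: int) -> int:
    # Assumption: addresses map to shards by simple modulo (illustrative only).
    return address % NUM_SHARDS

def category(contracts: list[int]) -> int:
    if not contracts:
        return 1                          # Category 1: plain A -> B transfer
    return 2 if len(contracts) == 1 else 3  # one contract vs. several

def needs_cross_shard(sender: int, contracts: list[int]) -> bool:
    shards = {shard_of(a) for a in (sender, *contracts)}
    return len(shards) > 1

print(category([]))                                   # 1: always single-shard
print(category([4]), needs_cross_shard(8, [4]))       # 2 False (both shard 0)
print(category([5]), needs_cross_shard(8, [5]))       # 2 True  (shards 0 and 1)
print(category([4, 5]), needs_cross_shard(8, [4, 5])) # 3 True
```

Category 2 is the borderline case: sometimes it stays inside one shard, sometimes not, while Category 3 generally forces the shards to coordinate.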
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is more information on the projects they are working on.
Business & Partnerships
It’s not only technology in which Zilliqa seems to be excelling as their ecosystem has been expanding and starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PWC 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought the initiatives live to the market. There is also an increasing list of organizations that are starting to provide digital payment services. Moreover, Singaporean blockchain developers Building Cities Beyond has recently created an innovation $15 million grant to encourage development on its ecosystem. This all suggests that Singapore tries to position itself as (one of) the leading blockchain hubs in the world.
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar (one of the co-founders) wrote. These stablecoins will be integrated into the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies, for example, start to use stablecoins for payments or remittances, instead of them being used solely for trading.
Zilliqa also released their DeFi strategic roadmap (dating from November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS, and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I'm speculating that it will be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
"There are two basic building blocks in DeFi/OpFi though: 1) stablecoins, as you need a non-volatile currency to get access to this market, and 2) a DEX to be able to trade all these financial assets. The rest are built on top of these blocks.
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
Additionally, they have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation, and research, and they are not from ASEAN or Singapore only but global: see the grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem, including individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with human-readable names and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and its resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and was recently added to Binance's margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have "tech people": they have a mix of tech people, business people, marketeers, scientists, and more. Naturally, it's good to have a mix of skill sets if you work in the crypto space.
Marketing & Community
Zilliqa has a very strong community. If you follow their Twitter, their engagement is much higher than you'd expect for a coin with approximately 80k followers. They have also been 'coin of the day' on LunarCrush many times (LunarCrush tracks real-time cryptocurrency value and social data). According to that data, Zilliqa seems to have a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run: it was ranked somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram has over 20k people and is very active, and their community channel, now over 7k, is more active and larger than many other projects' official channels. Their local communities also seem to be growing.
Moreover, their community started 'Zillacracy' together with the Zilliqa core team (see www.zillacracy.com). It's a community-run initiative where people from all over the world help with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot, and they will also run their own non-custodial seed node for staking. This seed node will allow them to generate revenue and become a self-sustaining entity that could potentially scale up into a decentralized company working in parallel with the Zilliqa core team. Comparing it to the other smart contract platforms (e.g. Cardano, EOS, Tezos, etc.), they don't seem to have started a similar initiative (correct me if I'm wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the 'power of the community'. This is something you cannot 'buy with money', and it gives many projects in the space a disadvantage.
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
Regarding Zeeves: this is a tipping bot for Telegram. They already have thousands of signups and plan to keep upgrading it so that more and more people use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It's a very smart approach to grow their communities and get people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need, and is good at finding the right innovative tools to grow and scale.
To be honest, I haven't covered everything (I'm also reaching the character limit haha). So many updates have been happening lately that it's hard to keep up: the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures, the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has been mentioning Zilliqa lately, acknowledging Zilliqa and noting that both projects have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
A mining pool is another cost of Bitcoin mining. Pools are collections of miners that share block rewards in proportion to contributed hash power. They have a negative aspect in that they concentrate control with the pools' owners, though miners can redirect their power to a different pool at any time. The cost of mining 1 Bitcoin varies with several factors, mainly the type of rig used, the country of mining, and the cost of the software. If you are planning on mining, expenses worth considering include: power costs in the region of mining; pool fees; the hash rate of the rig; labour; and crashes and unforeseen hacks. Rough estimates of the electricity cost of mining one Bitcoin by country suggest Kuwait is the cheapest country in the world to mine Bitcoin, while the Solomon Islands would be the most expensive; overall, Bitcoin's total electricity consumption is huge. Mining Bitcoin now demands more computational power than ever before, with mining difficulty reaching a new high of 17.35 trillion, up 9.89% from the previous record posted on July 1.