
Power of the Command Line (bitcoin-cli, hwi, electrum, trezorctl)

I think some of the console tools available for HW wallets today are greatly underutilized. Here's a quick write-up on how to create and sign a TXN very similar to 43d27...1fc06 found on the SLIP-14 wallet. I'll be using TrezorCTL, Electrum, and HWI for the signing. I won't go much into the setup or install, but feel free to ask if you have questions about it. Note: you don't have to use all three of these; any one will produce a valid signed TXN for broadcast. I just show how to do it three ways. What's more, some of the Electrum and HWI steps are interchangeable.
ColdCard also has a utility called ckcc that can perform the sign operation instead of HWI, and in many ways the two are interchangeable. KeepKey and Ledger both have libraries for scripted signing, but no one-shot, one-line console apps that I know of. HWI and Electrum, of course, work on all four.

TrezorCTL

This is what most would think to use to craft and sign TXNs, and it is definitely very simple. The signing uses a script called build_tx.py to create JSON that is then consumed by the btc sign-tx command. The whole process is basically:
  1. tools/build_tx.py | trezorctl btc sign-tx -
This just means: take the output of build_tx.py and sign it. To copy 43d27...1fc06, I wrote a small script to feed build_tx.py, so my process looks like:
  1. ~/input.sh | tools/build_tx.py | trezorctl btc sign-tx -
But it's all very simple. Note: I used TrezorCTL v0.12.2 but the build_tx.py from version 0.13.0 1.

input.sh

```
#!/bin/bash
secho() { sleep 1; echo "$*"; }

secho "Testnet"          # coin name
secho "tbtc1.trezor.io"  # blockbook server
# outpoint of the prev_out being spent (txid:vout)
secho "e294c4c172c3d87991b0369e45d6af8584be92914d01e3060fad1ed31d12ff00:0"
secho "m/84'/1'/0'/0/0"  # prev_out derivation to signing key
secho "4294967293"       # sequence for RBF; hex(-3)
secho "segwit"           # signature type on prev_out to use
secho ""                 # NACK to progress to outs
secho "2MsiAgG5LVDmnmJUPnYaCeQnARWGbGSVnr3"         # out[0].addr
secho "10000000"         # out[0].amt
secho "tb1q9l0rk0gkgn73d0gc57qn3t3cwvucaj3h8wtrlu"  # out[1].addr
secho "20000000"         # out[1].amt
secho "tb1qejqxwzfld7zr6mf7ygqy5s5se5xq7vmt96jk9x"  # out[2].addr
secho "99999694"         # out[2].amt
secho ""                 # NACK to progress to change
secho ""                 # NACK to skip change
secho "2"                # txn.version
secho "0"                # txn.locktime
```

Electrum

Electrum is one of the better GUI wallets available, but it also has a pretty good console interface. Like before, you need your Trezor with the SLIP-14 wallet loaded and paired to Electrum. I'll assume Electrum is up and running with the Trezor wallet loaded, to keep things simple.
Like with TrezorCTL, Electrum feeds on a JSON file, but unlike TrezorCTL it needs that JSON squished onto the command line. This is a simple sed command, and I won't bore you with the details; just assume that's done. So the process in Electrum (v4.0.3) looks like:
  1. electrum serialize (create psbt to sign)
  2. electrum --wallet signtransaction (sign said psbt)
Still pretty simple, right? Below is the JSON I smushed for step 1.

txn.json

{
  "inputs": [{
    "prevout_hash": "e294c4c172c3d87991b0369e45d6af8584be92914d01e3060fad1ed31d12ff00",
    "prevout_n": 0,
    "value_sats": 129999867
  }],
  "outputs": [{
    "address": "2MsiAgG5LVDmnmJUPnYaCeQnARWGbGSVnr3",
    "value_sats": 10000000
  }, {
    "address": "tb1q9l0rk0gkgn73d0gc57qn3t3cwvucaj3h8wtrlu",
    "value_sats": 20000000
  }, {
    "address": "tb1qejqxwzfld7zr6mf7ygqy5s5se5xq7vmt96jk9x",
    "value_sats": 99999694
  }]
}
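If you'd rather not hand-roll the sed, here's one way to do the smushing in Python (the structure is the txn.json from above, inlined so the sketch is self-contained):

```python
import json

# The txn.json content from above, as a Python dict:
tx = {
    "inputs": [{
        "prevout_hash": "e294c4c172c3d87991b0369e45d6af8584be92914d01e3060fad1ed31d12ff00",
        "prevout_n": 0,
        "value_sats": 129999867,
    }],
    "outputs": [
        {"address": "2MsiAgG5LVDmnmJUPnYaCeQnARWGbGSVnr3", "value_sats": 10000000},
        {"address": "tb1q9l0rk0gkgn73d0gc57qn3t3cwvucaj3h8wtrlu", "value_sats": 20000000},
        {"address": "tb1qejqxwzfld7zr6mf7ygqy5s5se5xq7vmt96jk9x", "value_sats": 99999694},
    ],
}

# Collapse to a single line with no extra whitespace, ready to paste
# as the argument to `electrum serialize`.
smushed = json.dumps(tx, separators=(",", ":"))
print(smushed)
```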

HWI

HWI is an unsung hero in my book. It's a very small, clean, and simple interface between HW wallets and Bitcoin Core, and it currently supports a good range of HW wallets. It keeps itself narrowly focused on TXN signing and offloads most everything else to Bitcoin Core. Again, I'll assume you've imported your Trezor keypool into Core and done the requisite IBD and rescan. And if you don't have the RPC enabled, you can always paste these commands into the Qt console.
To sign our TXN in HWI (v1.1.2), we will first need to craft (and finalize) it in Bitcoin Core (0.21.1). Like in Electrum, we will have to use sed to smush some JSON into command arguments, but I'll assume you have that covered. It takes an inputs.json and an outputs.json, named separately.
  1. bitcoin-cli createpsbt (create psbt)
  2. bitcoin-cli -rpcwallet= walletprocesspsbt (process psbt)
  3. hwi -f signtx (sign psbt)
  4. bitcoin-cli -rpcwallet= finalizepsbt (get a signed TXN from psbt)
A little more involved, but still nothing too bad. Plus this gives you the full power of Bitcoin Core including integrations with LND (lightning).

inputs.json

[{ "txid": "e294c4c172c3d87991b0369e45d6af8584be92914d01e3060fad1ed31d12ff00", "vout": 0 }]

outputs.json

[{ "2MsiAgG5LVDmnmJUPnYaCeQnARWGbGSVnr3": 0.10000000 },{ "tb1q9l0rk0gkgn73d0gc57qn3t3cwvucaj3h8wtrlu": 0.20000000 },{ "tb1qejqxwzfld7zr6mf7ygqy5s5se5xq7vmt96jk9x": 0.99999694 }]
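As a sanity check before signing, the numbers should balance: the BTC amounts in outputs.json, converted to sats, plus the miner fee must equal the 129999867-sat input from earlier. A quick Python sketch:

```python
# Sanity-check: input value minus output values gives the miner fee.
SATS_PER_BTC = 100_000_000

input_sats = 129_999_867  # value_sats of the prev_out we're spending
outputs_btc = [0.10000000, 0.20000000, 0.99999694]  # amounts in outputs.json

output_sats = sum(round(btc * SATS_PER_BTC) for btc in outputs_btc)
fee = input_sats - output_sats
print(output_sats, fee)  # 129999694 173
```

A fee of 173 sats on testnet is fine; on mainnet you'd size it to current fee rates.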

Conclusion

This may all seem like very low-level coding, but it is surprisingly simple once you get the knack for it. What's more, all these platforms support testnet, which allows you to practice with valueless coins until you get the hang of it. And, like many things in bitcoin, this is all (mostly) Python, which is one of the easier languages to learn.
Enjoy
Footnotes
1 - https://github.com/trezor/trezor-firmware/issues/1296
submitted by brianddk to Bitcoin [link] [comments]


How To Set Up a Firewall Using FirewallD on CentOS 7

The majority of this definition is actually metadata. You will want to change the short name for the service within the <short> tags. This is a human-readable name for your service. You should also add a description so that you have more information if you ever need to audit the service. The only configuration that actually affects the functionality of the service will likely be the port definition, where you identify the port number and protocol you wish to open. This can be specified multiple times.
For our “example” service, imagine that we need to open up port 7777 for TCP and 8888 for UDP. By entering INSERT mode by pressing i , we can modify the existing definition with something like this:
/etc/firewalld/services/example.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Example Service</short>
  <description>This is just an example service. It probably shouldn't be used on a real system.</description>
  <port protocol="tcp" port="7777"/>
  <port protocol="udp" port="8888"/>
</service>
Press ESC , then enter :x to save and close the file.
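If you want to double-check the definition before reloading, it is plain XML and easy to validate programmatically. A small Python sketch (with the service body inlined for the example; firewalld's on-disk files also carry an XML declaration):

```python
import xml.etree.ElementTree as ET

# The example service definition, inlined for the demo.
xml_text = """<service>
  <short>Example Service</short>
  <description>This is just an example service.</description>
  <port protocol="tcp" port="7777"/>
  <port protocol="udp" port="8888"/>
</service>"""

root = ET.fromstring(xml_text)
# Collect every (protocol, port) pair the service would open.
ports = [(p.get("protocol"), p.get("port")) for p in root.findall("port")]
print(ports)  # [('tcp', '7777'), ('udp', '8888')]
```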
Reload your firewall to get access to your new service:
sudo firewall-cmd --reload 
You can see that it is now among the list of available services:
firewall-cmd --get-services
output
RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry dropbox-lansync elasticsearch example freeipa-ldap freeipa-ldaps freeipa-replication freeipa-trust ftp ganglia-client ganglia-master high-availability http https imap imaps ipp ipp-client ipsec iscsi-target kadmin kerberos kibana klogin kpasswd kshell ldap ldaps libvirt libvirt-tls managesieve mdns mosh mountd ms-wbt mssql mysql nfs nrpe ntp openvpn ovirt-imageio ovirt-storageconsole ovirt-vmconsole pmcd pmproxy pmwebapi pmwebapis pop3 pop3s postgresql privoxy proxy-dhcp ptp pulseaudio puppetmaster quassel radius rpc-bind rsh rsyncd samba samba-client sane sip sips smtp smtp-submission smtps snmp snmptrap spideroak-lansync squid ssh synergy syslog syslog-tls telnet tftp tftp-client tinc tor-socks transmission-client vdsm vnc-server wbem-https xmpp-bosh xmpp-client xmpp-local xmpp-server
You can now use this service in your zones as you normally would.

Creating Your Own Zones

While the predefined zones will probably be more than enough for most users, it can be helpful to define your own zones that are more descriptive of their function.
For instance, you might want to create a zone for your web server, called “publicweb”. However, you might want to have another zone configured for the DNS service you provide on your private network. You might want a zone called “privateDNS” for that.
When adding a zone, you must add it to the permanent firewall configuration. You can then reload to bring the configuration into your running session. For instance, we could create the two zones we discussed above by typing:
sudo firewall-cmd --permanent --new-zone=publicweb 
sudo firewall-cmd --permanent --new-zone=privateDNS
You can verify that these are present in your permanent configuration by typing:
sudo firewall-cmd --permanent --get-zones
output
block dmz drop external home internal privateDNS public publicweb trusted work
As stated before, these won’t be available in the current instance of the firewall yet:
firewall-cmd --get-zones
output
block dmz drop external home internal public trusted work
Reload the firewall to bring these new zones into the active configuration:
sudo firewall-cmd --reload 
firewall-cmd --get-zones
output
block dmz drop external home internal privateDNS public publicweb trusted work
Now, you can begin assigning the appropriate services and ports to your zones. It's usually a good idea to adjust the active instance and then transfer those changes to the permanent configuration after testing. For instance, for the “publicweb” zone, you might want to add the SSH, HTTP, and HTTPS services:
sudo firewall-cmd --zone=publicweb --add-service=ssh 
sudo firewall-cmd --zone=publicweb --add-service=http
sudo firewall-cmd --zone=publicweb --add-service=https
sudo firewall-cmd --zone=publicweb --list-all
output
publicweb
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh http https
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
Likewise, we can add the DNS service to our “privateDNS” zone:
sudo firewall-cmd --zone=privateDNS --add-service=dns 
sudo firewall-cmd --zone=privateDNS --list-all
output
privateDNS
  interfaces:
  sources:
  services: dns
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
We could then change our interfaces over to these new zones to test them out:
sudo firewall-cmd --zone=publicweb --change-interface=eth0 
sudo firewall-cmd --zone=privateDNS --change-interface=eth1
At this point, you have the opportunity to test your configuration. If these values work for you, you will want to add the same rules to the permanent configuration. You can do that by re-applying the rules with the --permanent flag:
sudo firewall-cmd --zone=publicweb --permanent --add-service=ssh 
sudo firewall-cmd --zone=publicweb --permanent --add-service=http sudo firewall-cmd --zone=publicweb --permanent --add-service=https sudo firewall-cmd --zone=privateDNS --permanent --add-service=dns
After permanently applying these rules, you can restart your network and reload your firewall service:
sudo systemctl restart network 
sudo systemctl reload firewalld
Validate that the correct zones were assigned:
firewall-cmd --get-active-zones
output
privateDNS
  interfaces: eth1
publicweb
  interfaces: eth0
And validate that the appropriate services are available for both of the zones:
sudo firewall-cmd --zone=publicweb --list-services
output
http https ssh
sudo firewall-cmd --zone=privateDNS --list-services
output
dns
You have successfully set up your own zones! If you want to make one of these zones the default for other interfaces, remember to configure that behavior with the --set-default-zone= parameter:
sudo firewall-cmd --set-default-zone=publicweb 

Conclusion

You should now have a fairly good understanding of how to administer the firewalld service on your CentOS system for day-to-day use.
The firewalld service allows you to configure maintainable rules and rule-sets that take into consideration your network environment. It allows you to seamlessly transition between different firewall policies through the use of zones and gives administrators the ability to abstract port management into more friendly service definitions. Acquiring a working knowledge of this system will allow you to take advantage of the flexibility and power that this tool provides.
submitted by namemk to u/namemk [link] [comments]

⚡ Lightning Network Megathread ⚡

Last updated 2018-01-29
This post is a collaboration with the Bitcoin community to create a one-stop source for Lightning Network information.
There are still questions in the FAQ that are unanswered, if you know the answer and can provide a source please do so!

⚡What is the Lightning Network? ⚡

Explanations:

Image Explanations:

Specifications / White Papers

Videos

Lightning Network Experts on Reddit

  • starkbot - (Elizabeth Stark - Lightning Labs)
  • roasbeef - (Olaoluwa Osuntokun - Lightning Labs)
  • stile65 - (Alex Akselrod - Lightning Labs)
  • cfromknecht - (Conner Fromknecht - Lightning Labs)
  • RustyReddit - (Rusty Russell - Blockstream)
  • cdecker - (Christian Decker - Blockstream)
  • Dryja - (Tadge Dryja - Digital Currency Initiative)
  • josephpoon - (Joseph Poon)
  • fdrn - (Fabrice Drouin - ACINQ )
  • pmpadiou - (Pierre-Marie Padiou - ACINQ)

Lightning Network Experts on Twitter

  • @starkness - (Elizabeth Stark - Lightning Labs)
  • @roasbeef - (Olaoluwa Osuntokun - Lightning Labs)
  • @stile65 - (Alex Akselrod - Lightning Labs)
  • @bitconner - (Conner Fromknecht - Lightning Labs)
  • @johanth - (Johan Halseth - Lightning Labs)
  • @bvu - (Bryan Vu - Lightning Labs)
  • @rusty_twit - (Rusty Russell - Blockstream)
  • @snyke - (Christian Decker - Blockstream)
  • @JackMallers - (Jack Mallers - Zap)
  • @tdryja - (Tadge Dryja - Digital Currency Initiative)
  • @jcp - (Joseph Poon)
  • @alexbosworth - (Alex Bosworth - yalls.org)

Medium Posts

Learning Resources

Books

Desktop Interfaces

Web Interfaces

Tutorials and resources

Lightning on Testnet

Lightning Wallets

Place a testnet transaction

Altcoin Trading using Lightning

  • ZigZag - Disclaimer You must trust ZigZag to send to Target Address

Lightning on Mainnet

Warning - Testing should be done on Testnet

Atomic Swaps

Developer Documentation and Resources

Lightning implementations

  • LND - Lightning Network Daemon (Golang)
  • eclair - A Scala implementation of the Lightning Network (Scala)
  • c-lightning - A Lightning Network implementation in C
  • lit - Lightning Network node software (Golang)
  • lightning-onion - Onion Routed Micropayments for the Lightning Network (Golang)
  • lightning-integration - Lightning Integration Testing Framework
  • ptarmigan - C++ BOLT-Compliant Lightning Network Implementation [Incomplete]

Libraries

Lightning Network Visualizers/Explorers

Testnet

Mainnet

Payment Processors

  • BTCPay - Next stable version will include Lightning Network

Community

Slack

IRC

Slack Channel

Discord Channel

Miscellaneous

⚡ Lightning FAQs ⚡

If you can answer please PM me and include source if possible. Feel free to help keep these answers up to date and as brief but correct as possible
Is Lightning Bitcoin?
Yes. You pick a peer and after some setup, create a bitcoin transaction to fund the lightning channel; it’ll then take another transaction to close it and release your funds. You and your peer always hold a bitcoin transaction to get your funds whenever you want: just broadcast to the blockchain like normal. In other words, you and your peer create a shared account, and then use Lightning to securely negotiate who gets how much from that shared account, without waiting for the bitcoin blockchain.
Is the Lightning Network open source?
Yes, Lightning is open source. Anyone can review the code (in the same way as the bitcoin code)
Who owns and controls the Lightning Network?
Similar to the bitcoin network, no one will ever own or control the Lightning Network. The code is open source and free for anyone to download and review. Anyone can run a node and be part of the network.
I’ve heard that Lightning transactions are happening “off-chain”…Does that mean that my bitcoin will be removed from the blockchain?
No, your bitcoin will never leave the blockchain. Instead your bitcoin will be held in a multi-signature address as long as your channel stays open. When the channel is closed; the final transaction will be added to the blockchain. “Off-chain” is not a perfect term, but it is used due to the fact that the transfer of ownership is no longer reflected on the blockchain until the channel is closed.
Do I need a constant connection to run a lightning node?
Not necessarily,
Example: A and B have a channel. 1 BTC each. A sends B 0.5 BTC. B sends back 0.25 BTC. Balance should be A = 0.75, B = 1.25. If A gets disconnected, B can publish the first Tx where the balance was A = 0.5 and B = 1.5. If the node B does in fact attempt to cheat by publishing an old state (such as the A=0.5 and B=1.5 state), this cheat can then be detected on-chain and used to steal the cheaters funds, i.e., A can see the closing transaction, notice it's an old one and grab all funds in the channel (A=2, B=0). The time that A has in order to react to the cheating counterparty is given by the CheckLockTimeVerify (CLTV) in the cheating transaction, which is adjustable. So if A foresees that it'll be able to check in about once every 24 hours it'll require that the CLTV is at least that large, if it's once a week then that's fine too. You definitely do not need to be online and watching the chain 24/7, just make sure to check in once in a while before the CLTV expires. Alternatively you can outsource the watch duties, in order to keep the CLTV timeouts low. This can be achieved both with trusted third parties or untrusted ones (watchtowers). In the case of a unilateral close, e.g., you just go offline and never come back, the other endpoint will have to wait for that timeout to expire to get its funds back. So peers might not accept channels with extremely high CLTV timeouts. -- Source
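To make the bookkeeping in that example concrete, here is a toy Python model of the channel balances. This is purely illustrative: real channels track state with signed commitment transactions, not a shared object, and the "history" here stands in for the old states a cheater might try to broadcast.

```python
class Channel:
    """Toy payment channel: tracks two balances and all past states."""

    def __init__(self, a: float, b: float):
        self.a, self.b = a, b
        self.history = [(a, b)]  # old states an attacker might broadcast

    def pay(self, amount: float, frm: str) -> None:
        """Move `amount` from one side to the other ('a' or 'b')."""
        if frm == "a":
            self.a -= amount
            self.b += amount
        else:
            self.b -= amount
            self.a += amount
        self.history.append((self.a, self.b))

# The FAQ's example: A and B fund 1 BTC each,
# A sends B 0.5 BTC, then B sends back 0.25 BTC.
ch = Channel(1.0, 1.0)
ch.pay(0.5, "a")
ch.pay(0.25, "b")
print(ch.a, ch.b)  # 0.75 1.25
```

If B broadcasts the stale (0.5, 1.5) state from `history`, A's on-chain penalty mechanism is what claims the whole 2 BTC.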
What Are Lightning’s Advantages?
Tiny payments are possible: since fees are proportional to the payment amount, you can pay a fraction of a cent; accounting is even done in thousandths of a satoshi. Payments are settled instantly: the money is sent in the time it takes to cross the network to your destination and back, typically a fraction of a second.
Does Lightning require Segregated Witness?
Yes, but not in theory. You could make a poorer lightning network without it, which has higher risks when establishing channels (you might have to wait a month if things go wrong!), has limited channel lifetime, longer minimum payment expiry times on each hop, is less efficient and has less robust outsourcing. The entire spec as written today assumes segregated witness, as it solves all these problems.
Can I Send Funds From Lightning to a Normal Bitcoin Address?
No, for now. For the first version of the protocol, if you wanted to send a normal bitcoin transaction using your channel, you have to close it, send the funds, then reopen the channel (3 transactions). In future versions, you and your peer would agree to spend out of your lightning channel funds just like a normal bitcoin payment, allowing you to use your lightning wallet like a normal bitcoin wallet.
Can I Make Money Running a Lightning Node?
Not really. Anyone can set up a node, and so it’s a race to the bottom on fees. In practice, we may see the network use a nominal fee and not change very much, which only provides an incremental incentive to route on a node you’re going to use yourself, and not enough to run one merely for fees. Having clients use criteria other than fees (e.g. randomness, diversity) in route selection will also help this.
What is the release date for Lightning on Mainnet?
Lightning is already being tested on the Mainnet Twitter Link but as for a specific date, Jameson Lopp says it best
Would there be any KYC/AML issues with certain nodes?
Nope, because there is no custody ever involved. It's just like forwarding packets. -- Source
What is the delay time for the recipient of a transaction receiving confirmation?
Furthermore, the Lightning Network scales not with the transaction throughput of the underlying blockchain, but with modern data processing and latency limits - payments can be made nearly as quickly as packets can be sent. -- Source
How does the lightning network prevent centralization?
Bitcoin Stack Exchange Answer
What are Channel Factories and how do they work?
Bitcoin Stack Exchange Answer
How does the Lightning network work in simple terms?
Bitcoin Stack Exchange Answer
How are paths found in Lightning Network?
Bitcoin Stack Exchange Answer
How would the lightning network work between exchanges?
Each exchange will get to decide and need to implement the software into their system, but some ideas have been outlined here: Google Doc - Lightning Exchanges
Note that by virtue of the usual benefits of cost-less, instantaneous transactions, lightning will make arbitrage between exchanges much more efficient and thus lead to consistent pricing across exchange that adopt it. -- Source
How do lightning nodes find other lightning nodes?
Stack Exchange Answer
Does every user need to store the state of the complete Lightning Network?
According to Rusty's calculations we should be able to store 1 million nodes in about 100 MB, so that should work even for mobile phones. Beyond that we have some proposals ready to lighten the load on endpoints, but we'll cross that bridge when we get there. -- Source
Would I need to download the complete state every time I open the App and make a payment?
No you'd remember the information from the last time you started the app and only sync the differences. This is not yet implemented, but it shouldn't be too hard to get a preliminary protocol working if that turns out to be a problem. -- Source
What needs to happen for the Lightning Network to be deployed and what can I do as a user to help?
Lightning is based on participants in the network running lightning node software that enables them to interact with other nodes. This does not require being a full bitcoin node, but you will have to run "lnd", "eclair", or one of the other node softwares listed above.
All lightning wallets have node software integrated into them, because that is necessary to create payment channels and conduct payments on the network, but you can also intentionally run lnd or similar for public benefit - e.g. you can hold open payment channels or channels with higher volume, than you need for your own transactions. You would be compensated in modest fees by those who transact across your node with multi-hop payments. -- Source
Is there anyway for someone who isn't a developer to meaningfully contribute?
Sure, you can help write up educational material. You can learn and read more about the tech at http://dev.lightning.community/resources. You can test the various desktop and mobile apps out there (Lightning Desktop, Zap, Eclair apps). -- Source
Do I need to be a miner to be a Lightning Network node?
No -- Source
Do I need to run a full Bitcoin node to run a lightning node?
lit doesn't depend on having your own full node -- it automatically connects to full nodes on the network. -- Source
LND uses a light client mode, so it doesn't require a full node. The name of the light client it uses is called neutrino
How does the lightning network stop "Cheating" (Someone broadcasting an old transaction)?
Upon opening a channel, the two endpoints first agree on a reserve value, below which the channel balance may not drop. This is to make sure that both endpoints always have some skin in the game as rustyreddit puts it :-)
For a cheat to become worth it, the opponent has to be absolutely sure that you cannot retaliate against him during the timeout. So he has to make sure you never ever get network connectivity during that time. Having someone else also watching for channel closures and notifying you, or releasing a canned retaliation, makes this even harder for the attacker. This is because if he misjudged you being truly offline you can retaliate by grabbing all of its funds. Spotty connections, DDoS, and similar will not provide the attacker the necessary guarantees to make cheating worthwhile. Any form of uncertainty about your online status acts as a deterrent to the other endpoint. -- Source
How many times would someone need to open and close their lightning channels?
You typically want to have more than one channel open at any given time for redundancy's sake. And we imagine open and close will probably be automated for the most part. In fact we already have a feature in LND called autopilot that can automatically open channels for a user.
Frequency will depend whether the funds are needed on-chain or more useful on LN. -- Source
Will the lightning network reduce BTC Liquidity due to "locking-up" funds in channels?
Stack Exchange Answer
Can the Lightning Network work on any other cryptocurrency? How?
Stack Exchange Answer
When setting up a Lightning Network Node are fees set for the entire node, or each channel when opened?
You don't really set up a "node" in the sense that anyone with more than one channel can automatically be a node and route payments. Fees on LN can be set by the node, and can change dynamically on the network. -- Source
Can Lightning routing fees be changed dynamically, without closing channels?
Yes but it has to be implemented in the Lightning software being used. -- Source
How can you make sure that there will be routes with large enough balances to handle transactions?
You won't have to do anything. With autopilot enabled, it'll automatically open and close channels based on the availability of the network. -- Source
How does the Lightning Network stop flooding nodes (DDoS) with micro transactions? Is this even an issue?
Stack Exchange Answer

Unanswered Questions

How do on-chain fees work when opening and closing channels? Who pays the fee?
How does the Lightning Network work for mobile users?
What are the best practices for securing a lightning node?
What is a lightning "hub"?
How does lightning handle cross chain (Atomic) swaps?

Special Thanks and Notes

  • Many links found from awesome-lightning-network github
  • Everyone who submitted a question or concern!
  • I'm continuing to format for an easier Mobile experience!
submitted by codedaway to Bitcoin [link] [comments]

Unitimes AMA | Danger in Blockchain, Data Protection is Necessary

At 10:30 on September 12, Unitimes held the 40th online AMA about blockchain technologies and applications. We were glad to have Joanes Espanol , CEO and CTO of Amberdata, to share with us on ‘’Danger in Blockchain, Data Protection is Necessary‘’ . The AMA is composed of two parts : Fixed Q&A and Free Q&A. Check out the details below!

Fixed Q&A

  1. Please introduce yourself and Amberdata
Hi everybody, my name is Joanes Espanol and I am co-founder and CTO of Amberdata. Prior to founding Amberdata, I have worked on several large scale ingestion pipelines, distributed systems and analytics platforms, with a focus on infrastructure automation and highly available systems. I am passionate about information retrieval and extracting meaning from data.
Amberdata is a blockchain and digital asset company which combines validated blockchain and market data from the top crypto exchanges into a unified platform and API, enabling customers to operate with confidence and build real-time data-powered applications.
  1. What type of data does the API provide?
The advantage and uniqueness of Amberdata’s API is the combination of blockchain and pricing data together in one API call.
We provide a standardized way to access blockchain data (blocks, transactions, account information, etc) across different blockchain models like UTXO (Bitcoin, Litecoin, Dash, Zcash...) and Account Based (Ethereum...), with contextualized pricing data from the top crypto exchanges in one API call. If you want to build applications on top of different blockchains, you would have to learn the intricacies of each distributed ledgers, run multiple nodes, aggregate the data, etc - instead of spending all that time and money, you can start immediately by using the APIs that we provide.
What can you get access to? Accounts, account-balances, blocks, contracts, internal messages, logs and events, pending transactions, security audits, source code, tokens, token balances, token transfers, token supplies (circulating & total supplies), transactions as well as prices, order books, trades, tickers and best bid and offers for about 2,000 different assets.
One important thing to note is that most of the APIs return validated data that anybody can verify by themselves. Blockchain is all about trust - operating in a hostile and trustless environment, maintaining consensus while continuously under attack, etc - and we want to make sure that we maintain that level of trust, so the API returns all the information that you would need to recalculate Merkle proofs yourself, hence guaranteeing the data was not tampered with and is authentic.
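Recomputing a Merkle proof of this kind takes only a few lines. The sketch below assumes a Bitcoin-style double-SHA256 tree and an illustrative (sibling, is-left) proof layout, not Amberdata's actual response format:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaf: bytes, proof: list) -> bytes:
    """Fold a leaf hash up the tree using (sibling_hash, sibling_is_left) pairs."""
    node = leaf
    for sibling, sibling_is_left in proof:
        pair = sibling + node if sibling_is_left else node + sibling
        node = double_sha256(pair)
    return node

# Toy example: a two-leaf tree.
leaf_a = double_sha256(b"tx-a")
leaf_b = double_sha256(b"tx-b")
root = double_sha256(leaf_a + leaf_b)
assert merkle_root(leaf_a, [(leaf_b, False)]) == root
assert merkle_root(leaf_b, [(leaf_a, True)]) == root
```

If the recomputed root matches the block header's Merkle root, the returned transaction data was not tampered with.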
  1. Why is it important to combine blockchain and market data?
Cryptoeconomics plays a key role in the blockchain world. One simple way to explain this is to look at why peer-to-peer file sharing systems like BitTorrent failed. These file sharing protocols were an early form of decentralization, with each node contributing to and participating in this “global sharing computer”. The issue with these protocols is that they relied on the good will of each participant to (re-)share their files - but without economic incentive, or punishment for not following the rules, it opened the door to bad behavior which ultimately led to its demise.
The genius of Satoshi Nakamoto was to combine and improve upon existing decentralized protocols with game theory, to arrive at a consensus protocol able to circumvent the Byzantine Generals Problem. Now participants have incentives to follow the rules (they get financially rewarded for doing so by mining for example, and penalized for misbehaving), which in turn results in a stable system. This was the first time that crypto-economics were used in a working product and this became the base and norm for a lot of the new systems today.
Pricing data is needed as context to blockchain data: there are a lot of (ERC-20) tokens created on Ethereum - it is very easy to clone an existing contract, and configure it with a certain amount of initial tokens (most commonly in the millions and billions in volume). Each token has an intrinsic value, as determined by the law of supply and demand, and as traded on the exchanges. Price fluctuations have an impact on the adoption and usage, meaning on the overall transaction volume (and to a certain extent transaction throughput) on the blockchain.
Blockchain data is needed as context to market data: activity on blockchain can have an impact on market data. For example, one can look at the incoming token transfers in the Ethereum transaction pool and see if there are any impending big transfers for a specific token, which could result in a significant price move on the other end. Being able to detect that kind of movement and act upon it is the kind of signals that traders are looking for. Another example can be found with token supplies: exchanges want to be notified as soon as possible when a token circulating supply changes, as it affects their trading ability, and in the worst case scenario, they would need to halt trading if a token contract gets compromised.
In conclusion, events on the blockchain can influence price, and market events also have an impact on blockchain data: the two are intimately intertwined, and putting them both in context leads to better insights and better decision making.
  1. All the data you provide is publicly available, what gives?
Very true, all this data is publicly available, that is one of the premises and fundamentals of blockchain models, where all the data is public and transparent across all the nodes of the network. The problem is that, even though it is publicly available, it is not quick, not easy and not cheap to access.
Not quick: blockchain data structures were designed and optimized for achieving consensus in a hostile and trustless environment and for internal state management, not for random access and overall search. Imagine you want to list all the transactions your wallet address has participated in. The only way to do that would be to replay all the transactions from the beginning of time (starting at the genesis block), looking at the to and from addresses and retaining only the ones matching your wallet: with over 500 million transactions as of today, it would take an unacceptable amount of time to retrieve that list for a customer-facing application.
Not easy: Some very basic things that one would expect when dealing with financial assets and instruments are actually very difficult to get at, especially when related to tokens. For example, the current Ether balance of a wallet is easy to retrieve in one call to a Geth or Parity client - however, looking at time series of these balances starts to be a little hairy, as not all historical state is kept by these clients, unless you are running a full archive node. Looking at token holdings and balances gets even more complicated, as most of the token transfers are part of the transient state and not kept on chain. Moreover, token transfers and balance changes over time are triggered by different mechanisms (especially when dealing with contract to contract function calls), and detecting these changes accurately is prone to errors.
Not cheap: As mentioned above, most of the historical data and time series metrics are only available via a full archive node, which at the time of writing requires about 3TB of disk space, just to hold all the blockchain state - and remember, this state is in a compressed and not easily accessible format. To convert it to a more searchable format requires much more space. Also, running your own full archive node requires constant care, maintenance and monitoring, which has become very expensive and prohibitive to run.
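The "not quick" point - replaying from genesis - can be sketched in a few lines (the block/transaction layout here is hypothetical; a real implementation would go through a node's RPC or, as the answer argues, a pre-built index):

```python
def transactions_for_address(get_block, tip_height, address):
    """Naive full scan: walk every block from genesis and keep transactions
    whose inputs or outputs touch the given address."""
    matches = []
    for h in range(tip_height + 1):
        for tx in get_block(h)["txs"]:
            if address in tx["from"] or address in tx["to"]:
                matches.append(tx["txid"])
    return matches

# Tiny in-memory "chain" with two blocks, standing in for a real node.
chain = [
    {"txs": [{"txid": "t0", "from": ["alice"], "to": ["bob"]}]},
    {"txs": [{"txid": "t1", "from": ["bob"], "to": ["carol"]}]},
]
print(transactions_for_address(lambda h: chain[h], 1, "bob"))  # ['t0', 't1']
```

The cost is linear in the full history on every query, which is why an indexed API beats rescanning.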
  1. Who uses your API today and what do they do with it?
A wide variety of applications and projects are using our API, across different industries ranging from wallets and trust funds (DappRadar), to accounting and arbitrage firms (Moremath), including analytics (Stratcoins) and compliance & security companies (Blue Swan). Amberdata's API is attractive to many different people because it is very complete and fast, and provides additional data enrichment not available in other APIs. Because of this, it appeals to and fits nicely with our customers' use cases:
· It can be used in the traditional REST way to augment your own processes or enrich your own data with hard to get pieces of information. For example, lots of our users retrieve historical information (blocks and transactions) and relay it in their applications to their own customers, while others are more interested in financial data (account & token balances) and time series for portfolio management.
https://medium.com/amberdata/keep-it-dry-use-amberdatas-api-9cdb222a41ba
· Other projects are more in need of real-time up-to-date data, for which we recommend using our websockets, so you can filter out data in real-time and match your exact needs, rather than getting the firehose of information and having to filter out and discard 99% of it.
· We have a few research projects tapping into our API as well. For example, some of our customers want access to historical market data to backtest their trading strategies and fine-tune their own algorithms.
· Our API is also fully Json RPC compliant, meaning some people use it as a drop-in replacement for their own node, or as an alternative to Infura for example. We have some customers using both Amberdata and Infura as their web3 providers, with the benefits of getting additional enriched data when connecting to our API.
· And finally, we have also built an SDK on top of the API itself, so it is easier to integrate into your own application (https://www.npmjs.com/package/web3data-js).
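On the Json RPC point above: a drop-in replacement means standard JSON-RPC 2.0 requests work unchanged against the provider's endpoint. A minimal sketch (the endpoint URL and API-key header below are placeholders, not Amberdata's documented values):

```python
import json
import urllib.request

def build_rpc_payload(method, params, req_id=1):
    """Standard JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def json_rpc(url, method, params, api_key):
    """POST a JSON-RPC request and return the 'result' field."""
    data = json.dumps(build_rpc_payload(method, params)).encode()
    req = urllib.request.Request(
        url, data=data,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

# e.g. json_rpc("https://rpc.example.com", "eth_blockNumber", [], "YOUR-KEY")
print(build_rpc_payload("eth_blockNumber", []))
```

Because the request shape is the standard one, web3 libraries that accept a provider URL can point at such an endpoint without code changes.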
We also have several subscriptions to match your needs. The developer tier is free and gets you access to 90% of all the data. If you are not sure about your usage patterns yet, we recommend the on-demand plan to get started, while for heavy users the professional and enterprise plans would be more adequate - see https://amberdata.io/pricing for more information.
All in all, we try really hard to make it as easy as possible for you to use. We do the heavy lifting, so you don't have to worry about all the minutiae and you can focus on bringing value to your customers. We work very closely with our customers and continuously improve upon and add new features to our API. If something is not supported or you want something that is not in the API, chances are we already have the data - do not hesitate to ask us ;)
  1. Amberdata recently made some headlines for discovering a vulnerability on Parity client. Can you tell us a bit more about it?
This is an interesting one. One of our internal processes flagged a contract, and more specifically the balanceOf(...) call: it was/is taking more than 5 seconds to execute (while typically this call takes only a few milliseconds). While investigating further, we started looking at the debug traces for that contract call and were pretty surprised when a combination of trace_call+vmTrace crashed our Parity node - and not just randomly, the same call would exhibit the exact same behavior each time, and on different Parity nodes. It turns out that this contract is very poorly written, and the implementation of balanceOf(...) keeps on looping over all the holders of the token, which eventually runs out of memory.
Even though this is a pretty severe bug (any/all Parity node(s) can be remotely shutdown with just one small call to its API), in practice the number of nodes at risk is probably small because only operators who have enabled public facing RPC calls (and possibly the ones who have enabled tracing as well) are affected - which are both disabled by default. Kudos to the Parity team for fixing and releasing a patch in less than 24 hours after the bug was reported!
  1. How do you access the data? How do I get started?
We sometimes get the question, “I do not know how to code, can I still use your data?”, and it is possible! We have built a few dashboards on our platform, and you can visualize and monitor different metrics, and get alerts: https://amberdata.io/dashboards/infrastructure.
A good starting point is to use our Postman collection, which is pretty complete and can give you a very good overview of all the capabilities: https://amberdata.io/docs/libraries and https://www.getpostman.com/collections/79afa5bafe91f0e676d6.
For more advanced users, the REST API is where you should start, but as I mentioned earlier, how to access the data depends on your use case: REST, websockets, Json RPC and SDK are the most commonly ways of getting to it. We have a lot of tutorials and code examples available here: https://amberdata.io/docs.
For developers interested in getting access to Amberdata’s blockchain and market data from within their own contract, they can use the Chainlink Oracle contract, which integrates directly with the API:
https://medium.com/amberdata/smart-contract-oracles-with-amberdata-io-358c2c422d8a
  1. Amberdata just recently celebrated its second birthday. What is your proudest accomplishment? Any mistake/lesson you would like to share with us?
The blockchain and crypto market is one of the fastest evolving and innovating markets ever, and a very fast paced environment. Having been heads down for two years now, it is sometimes easy to lose sight of the big picture. The journey has been long, but I am happy and proud to see it all come together: we started with blockchain data and monitoring/alerting, added search, validation and derived data (tokens, supplies, etc) along the way, and finally market data to close the loop on all the cryptoeconomics. Seeing the overall engagement from the community around our data is very gratifying: API usage climbing up, more and more pertinent and relevant questions/suggestions on our support channels, other projects like Kadena sending us their own blockchain data so it can be included in Amberdata's offering… all of this makes me want to do more :)

Free Q&A

---Who are your competitors? What makes you better?
There are a few data providers out there offering similar information as Amberdata. For example, Etherscan has very complete blockchain data for Ethereum, and CoinmarketCap has assets rankings by market cap and some pricing information. We actually did a pretty thorough analysis of the different data providers and their pros and cons:
https://medium.com/amberdata/which-blockchain-data-api-is-right-for-you-3f3758efceb1
What makes Amberdata unique is threefold:
· Combination of blockchain and market data: typically other providers offer one or the other, but not both, and not integrated with each other - with Amberdata, in one API call I can get blockchain and historically accurate pricing data at the same time. We have also standardized access across multiple blockchains, so you get one interface for all and do not have to worry about understanding each and every one of them.
· Validated & verifiable data: we work hard to preserve transparency and trust and are very open about how our metrics are calculated. For example, blockchain data comes with all the pieces needed to recompute the Merkle proofs so the integrity of the data can be verified at any moment. Also, additional metrics like circulating supply are based on tangible and very concrete definitions so anybody can follow and recalculate them by themselves if needed.
· Enriched data: we have spent a lot of time enriching our APIs with (historical) off chain data like token names and symbols, mappings for token addresses and tradable market pairs, etc. At the same time, our APIs are very granular and provide a level of detail that only a few other providers offer, especially with market data (Level 2 with order books across multiple exchanges, Best Bid Offers, etc).
That's all for the 40th AMA. We would like to thank all the community members for their participation and cooperation! Thanks, Joanes!
submitted by Unitimes_ to u/Unitimes_ [link] [comments]

jl777 informal Telegram chat 9/AUG/18. Topics include Crypto Condition smart contracts, ERC20 token migration, scaling, and marketing.

TL;DR will be in comments
This is an informal discussion with jl777 and some Komodo team members and the community on Telegram. I've edited some of the grammar and structure, combined shorter messages into paragraphs to help readability, and removed messages that weren't relevant to the discussion or had already been answered. Full text is visible on Komodo telegram, or Komodo Discord (telegram channel)
---
N 21: Mr. Please when smart contracts?
jl777: first four reference contracts are pretty much working now, likely some bug fixes needed but assets, faucet, rewards and dice CC contracts are functional
Firedragon: Ethereum started the year 2017 around 7 dollars and ended the year around $1,400, roughly 200x; user friendliness leads to giant expansion.
jl777: Sure, no disagreement, but code snippets don’t seem very user friendly. Unless you mean having cartoon characters alongside the code snippets makes it user friendly?
jl777: Having non-coders doing code is dangerous, no matter if it has cartoons.
Firedragon: enables expansion at scale.
jl777: Until you get an actual dApp that is popular and the entire chain bogs down and TX fees go to $5+
Firedragon: it is dangerous, but once run into problem, will likely seek help, but important, easy to start.
jl777: With the CC contracts, it is a matter of issuing RPC calls to start a contract. So it doesn't require coding, but if your use case isn't covered by an existing CC contract, then a custom CC contract needs to be written. It is similar to ASIC mining vs CPU mining. More work to make the ASIC, but once created it can do all the related things much more efficiently.
jl777: Assets with DEX, faucet, rewards and dice CC contracts I made in a month. We are in the process of hiring a dedicated CC contracts dev, they are not that much work to code but it does require to be coded and I only did the cli rpc calls, not GUI.
Firedragon: Komodo solves the scalability issues of CryptoKitties that bogged Ethereum down. I believe Ethereum solves the usability issues. If only we could combine the two.
jl777: Yes, KMD solves the Cryptokitties scalability issue, but someone has to write the Cryptokitties CC contract which is mostly independent of the blockchain as it has the kitty’s DNA logic that seems to be most of it.
Jakub Alex: Jl777 what is your opinion about future price after bear market? What price levels can we expect with komodo? 50-100? 100-200 or more? Are we able to be in top 15 m cap coins?
jl777: Future KMD price depends on overall crypto market cap, if you can predict that, then I can predict a future KMD price. Historically KMD is a bit under 0.1% of overall crypto market cap. Assuming KMD price rises in CMC to top 10 level, then it would be closer to 1% of overall crypto market cap.
jl777: So really, just 2 variables. What overall rank on CMC and the total crypto market cap.
Chris Fray: Staking is the best way. You come back in a year, collect your 5% and do it all over again - don't panic sell.
jl777: It will be monthly starting very soon.
The Fonz: How many nodes are there currently? Running the network? is there a real time link to a list?
jl777: There is no way to know for sure, I would estimate about 1000
Rubinho: How do you run a node on KMD? Just by staking tokens?
jl777: Running a native wallet, runs a node.
Rubinho: Who is the intended end user for KMD? b2c? b2b? both? what is the state of partnerships?
The Fonz: Minimum KMD required?
Regnar: To run a node? No minimum required, but to earn 5.1% rewards on your balance, you need a minimum of 10 KMD in an address, the simplest way to do this is to store funds in Agama (even the lite mode) and press the claim interest button.
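The reward rule Regnar describes is simple enough to sanity-check (non-compounding sketch; the actual accrual and claiming mechanics, e.g. the monthly claiming jl777 mentions earlier, are not modeled):

```python
def yearly_reward(balance, rate=0.051, minimum=10):
    """Rewards accrue only on addresses holding at least the minimum (10 KMD)."""
    return balance * rate if balance >= minimum else 0.0

assert yearly_reward(100) == 100 * 0.051   # ~5.1 KMD per year on 100 KMD
assert yearly_reward(5) == 0.0             # below the 10 KMD minimum
```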
The Fonz: gotcha, thx.
Regnar: Probably best to check out the discord marketing channel for updates and talks on that; telegram generally gets cluttered quickly with other conversations. There are definitely partnerships being worked on, but I think it would be better to wait for official announcements on them, so I won't disclose or help push any rumors there. Here's the link: https://discord.gg/5WtTNKX
Regnar: jl777, I assume ERC20 projects who are looking to migrate off of Ethereum, or launch their main net, would be interested in Komodo as CC becomes more developed. Can you say what that migrating/main net process might look like for those projects?
jl777: the process would be to get a new chain spawned to test the custom contract with, and when it is working, to just migrate the ERC20 snapshot to the new chain. The CC framework is ready to be coded to, and devs can create a custom contract embedded in their chain: txfees in their chain, only tx from their dapp on their chain.
Regnar: And I assume before they migrate to their new chain they can have all the features and things in place to be atomic swap capable, set up for dPoW, and would have the same scaling tech Komodo uses?
jl777: yes, all can be put in place before doing the snapshot and migrating, and everything other than enabling dPoW can be done without our being involved, though we are happy to help.
Regnar: Okay so the dPoW part requires Komodo involvement, and GameCredits, Kreds, Utrum, and some other coins have added dPoW to their chains for the extra security. Is this process getting easier to do for established coins that aren't looking to migrate?
jl777: as far as whether KMD will survive this bear market: I am confident KMD will survive the next bear market and even the one after that. We have funding for 10+ years of dPoW, and while we do have end-user deliverables, that is very expensive marketing-wise for customer acquisition. Our focus on enabling technologies allows us to run much leaner on the marketing side, and in any case, we don't have millions of dollars to be throwing about left and right for paid placements and listings like many of the other projects do. It will take longer, but as more people learn about KMD, the mentions we will get will be based on KMD attributes and not due to paid placements.
Rubinho: The main focus for many platforms is to attract devs.. i think the perception of the numbers of devs out is over estimated... what would you consider to be a "possible application" that can be built on KMD for the benefit of the end user? Again I own KMD, I just want to have a better grasp of KMD's competitive edge, is it only TPS? And if there isn't a lot of money for paid placements... which is understandable... how would KMD be better distributed in more hands?
jl777: I wrote 4 reference contracts last month: assets with DEX, faucet, rewards and dice. Once written they can be configured without programming, each CC contract that is written becomes part of the baseline contracts available for all chains, and the rewards CC contract implements what seems to be what is most liked about masternodes.
xRobeSx: Assetchains / dPoW / jumblr / 5% interest... the list goes on and on. The fact that anyone can spawn a new chain in seconds that has a built in faucet, is pretty damn cool.
The Fonz: This link is helpful as well https://www.reddit.com/komodoplatform/comments/8gyajv/welcome_to_komodo_a_beginners_guide/
jl777: Assets/tokens are also pretty important and dice is a classic blockchain game, all these are blockchain enforced trustless implementations
Steve Lee: I'd recommend reading https://komodoplatform.com/komodo-evolution-the-5-pillars-of-blockchain-tech/ and the 5 deep dive posts linked in the article.
Rubinho: how would that be beneficial in a real life scenario? what would be your prediction for the future in terms of the numbers of chains or platforms? Many or few winners?
xRobeSx: The first few years of bitcoin, was a lot of faucets and dice games :smiley: now anyone can do it in minutes ha.
Steve Lee: We're working with https://www.ideasbynature.com/ for a full rebrand and also refreshing the UX/UI across our product portfolio. Ideas By Nature is the world’s leading agency focused solely on the design and development of blockchain products. Located in the heart of Denver, CO.
jl777: these are just the reference CC contracts, the possibilities are endless as to what can be implemented. Basically anything you can describe coherently in detail can be made into a CC contract.
Steve Lee: We've spent the last year building the most securely scalable and interoperable blockchain infrastructure. We find this was critical to first focus on building the right foundation for our ecosystem and ensuring it was future-proof and to address many of the limitations we're seeing today.
Χαίρετε: UX/UI is better compared with the old KMD apps, but it's still far from user friendly. The standards have to be much, much higher, otherwise I don't see much adoption.
J: I want to see ethLend style CC system in KMD
Rubinho: btw... I own KMD on ledger… will there be a way to gain the interest without having to move it to another wallet? I mean to claim. I don't want to take my private keys and put them into another wallet.
Steve Lee: This is our current interim solution. We're exploring full wallet integration with the Ledger team. https://www.youtube.com/watch?v=nKBdGI8pu7M&lc=z22audyhlvnrjpdwq04t1aokgbvrwe1nz2iz0sbmi0mnrk0h00410
Rubinho: thank u for that!
Χαίρετε: average users access your infrastructure via apps; it doesn't matter how good your underlying tech is, if apps are hard to use, people will leave. Google Cloud has better tech than AWS - better engineering team, better talent - but it's losing big against AWS; internally Google said it will only fund it two more years if things don't turn around, and I know a number of people who have left that team. So the infrastructure doesn't matter in the end. I think the KMD team has this engineering mentality, thinking that UX/UI is easy, but the opposite is true: designing a user-friendly product is hard, even harder than engineering.
Steve Lee: Agreed, that's why since we've finished building the foundational architecture, we're moving focuses on building out our developer community, documentation, training, and are moving towards development of a smart contract reference library.
Siu: Mm1.0 is a proof of concept. As it is right now it still is ages ahead of similar concepts. It should be jaw dropping even as it currently is. The real problem bdex confronts is the prostitution of the "atomic swap" name by ETH tokens.
Steve Lee: Also in plan is building a GUI frontend for blockchain creation, customization, etc.
jl777: if everything was perfect already, we would be done and have nothing to do. it is a process of continuous improvement
James: Ethereum has solidity. Is there equivalent for Komodo?
jl777: CC contracts are native code running at full speed. Any language that can be compiled into a library can be used, but we are just getting started with making different language bindings, and it would need to interface to the c/c++ functions inside the komodod. My thinking is that blockchain coding is difficult and not something end users or casual programmers should be doing; it can and has led to some very expensive errors, and even experienced blockchain devs make errors. C++ is the easiest, but any language that compiles would be able to be used.
James: OK. So it is c++
Rubinho: could you walk me thought how am I (the end user) going to use KMD in the system? is KMD like GAS for the platform?
jl777: rust would likely not be too hard to interface to. The CC contracts tend to be configurable, so the user can determine the settings of the rewards and not have to code it, of course a full blockchain project creating their own dapp would need to still create their own dapp, but on their own chain, there is no crazy txfee, congestion, etc
[End Chat]
submitted by regnar2 to komodoplatform [link] [comments]

Serialization: Qtum Quantum Chain Design Document (6): x86 Virtual Machines Reshaping Smart Contract Ecosystem

Qtum Original Design Document Summary (6) -- Qtum x86 Virtual Machine

https://mp.weixin.qq.com/s/0pXoUjXZnqJaAdM4vywvlA
As we mentioned in the previous chapters, Qtum uses a layered design: through the Qtum AAL, the Ethereum virtual machine EVM can run on the underlying UTXO model, making Qtum compatible with Ethereum's smart contracts. However, the EVM itself has many limitations and is currently compatible only with high-level languages such as Solidity for smart contract writing; its security and maturity still require verification over time. The Qtum AAL was designed from the start to be compatible with multiple virtual machines, so after the initial EVM compatibility, the Qtum team committed to supporting virtual machines with more mainstream architectures, and in turn mainstream programming languages and toolchains.
The Qtum x86 virtual machine is the development focus of the Qtum project in 2018. It aims to create a virtual machine compatible with the x86 instruction set, provide similar operating-system-level calls, and push smart contract development into the mainstream.
The following section excerpts some of the Qtum development team's original design documents for the Qtum x86 virtual machine (with Chinese translation) (note: QTUM<#> or QTUMCORE<#> are internal design document numbers):
QTUMCORE-103: [x86lib] Add some missing primary opcodes
Description:There are several missing opcodes in the x86 VM right now. For this story, complete the following normal opcodes (it should just be a lot of connecting code, nothing too intense)
//op(0x9C, op_pushf);
//op(0x9D, op_popf);
//op(0xC0, op_group_C0); //186
// C0 group: _rm8_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
//op(0xC1, op_group_C1); //186
// C1 group: _rmW_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
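The rotate/shift semantics behind the C0/C1 groups above can be sketched outside the VM. A minimal 8-bit version (Python for brevity - the real x86lib code is C++ - with CF/OF flag updates omitted):

```python
MASK8 = 0xFF

def rol8(value, count):
    """ROL: rotate left; bits leaving the top re-enter at the bottom."""
    count %= 8
    return ((value << count) | (value >> (8 - count))) & MASK8 if count else value

def ror8(value, count):
    """ROR: rotate right; bits leaving the bottom re-enter at the top."""
    count %= 8
    return ((value >> count) | (value << (8 - count))) & MASK8 if count else value

def shl8(value, count):
    """SHL/SAL: logical shift left, truncated to 8 bits."""
    return (value << count) & MASK8

def sar8(value, count):
    """SAR: arithmetic shift right, replicating the sign bit."""
    sign = value & 0x80
    for _ in range(count):
        value = (value >> 1) | sign
    return value

assert rol8(0b1000_0001, 1) == 0b0000_0011
assert ror8(0b0000_0011, 1) == 0b1000_0001
assert shl8(0xFF, 1) == 0xFE
assert sar8(0x80, 2) == 0xE0
```

RCL/RCR are the same idea but rotate through the carry flag, making them 9-bit rotates.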
QTUMCORE-106: [x86lib] Add some more missing primary opcodes
Description: There are a few missing opcodes in the x86 VM right now. For this story, complete the following normal opcodes (it should just be a lot of connecting code, nothing too intense)
//op(0x60, op_pushaW); //186
//op(0x61, op_popaW); //186
//op(0x6C, op_insb_m8_dx); //186
//op(0x6D, op_insW_mW_dx); //186
//op(0x6E, op_outsb_dx_m8); //186
//op(0x6F, op_outsW_dx_mW); //186
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
QTUMCORE-104: [x86lib] Add some missing extended opcodes
Description: There are several missing opcodes in the x86 VM right now. For this story, complete the following extended (0x0F prefix) opcodes (it should just be a lot of connecting code, nothing too intense)
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAF, op_imul_rW_rmW); //386
opx(0xB0, op_cmpxchg_rm8_al_r8); //486
opx(0xB1, op_cmpxchg_rmW_axW_rW); //486
for(int i=0;i<8;i++)
{ opx(0xC8 + i, op_bswap_rW); }
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
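The loop above registers BSWAP (0x0F 0xC8+r), where the low bits of the opcode select the register whose byte order is reversed. A self-contained sketch of the core operation the op_bswap_rW handler performs:

```cpp
#include <cassert>
#include <cstdint>

// Reverse the byte order of a 32-bit value, as BSWAP does to a register.
static uint32_t bswap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}
```

BSWAP is its own inverse, which also makes a convenient unit-test property.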
The series of tasks above implements most of the opcodes needed for the x86lib core. These are the basis for the virtual machine to recognize and run x86 instructions and to function as an x86 emulator.
QTUMCORE-105: [x86lib] Research how to do automated testing for x86lib
Description: Research and look for viable ways to do automated testing of x86lib's supported opcodes
Through the tasks above, the Qtum team achieved automated testing of the x86 virtual machine core. Parsing and execution errors in the underlying instructions are often difficult to find through debugging, so automated testing tools are required; this ensures the correctness of the x86lib core.
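One plausible shape for such automated opcode tests (an illustrative sketch, not the harness the team actually chose) is table-driven: run one operation per case and compare the result and flags against values taken from a reference manual.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// One test case for a 32-bit ADD: inputs, expected result, expected carry.
struct AddCase { uint32_t a, b, expected; bool carry; };

// Reference model of 32-bit ADD with carry-out (stand-in for a VM run).
static uint32_t add32(uint32_t a, uint32_t b, bool &carry_out) {
    uint64_t wide = (uint64_t)a + b;
    carry_out = wide > 0xFFFFFFFFull;
    return (uint32_t)wide;
}

// Run every case; report and count mismatches.
static int run_add_cases(const AddCase *cases, int n) {
    int failures = 0;
    for (int i = 0; i < n; i++) {
        bool cf = false;
        uint32_t got = add32(cases[i].a, cases[i].b, cf);
        if (got != cases[i].expected || cf != cases[i].carry) {
            std::printf("case %d failed\n", i);
            failures++;
        }
    }
    return failures;
}
```

The same table pattern scales to whole instruction sequences: assemble a tiny program, execute it in the VM, and diff the register file against expectations.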
QTUMCORE-109:[x86] Add "reason" field for all memory requests
Description: In order to prepare for the upcoming gas model, a new field needs to be added to every memory access. This field basically gives the reason for why memory is being accessed so that it can be given a proper gas cost. Possible reasons:
Code fetching (used for opcode reading, ModRM parsing, immediate arguments, etc)
Data (used for any memory reference in the program, such as mov [1234], eax. also includes things like ModRM::WriteWord() etc)
Internal (used for any internal memory reading that shouldn't be given a price... probably not used right now outside of testbench/testsuite code)
This "reason" code can be placed in MemorySystem(). It shouldn't go in each individual MemoryDevice object.
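A minimal sketch of the idea, with a toy MemorySystem standing in for x86lib's real class: every read carries a reason tag, and a later gas model can bill the tagged counters.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// The three access reasons named in the ticket.
enum class MemAccessReason { CodeFetch, Data, Internal };

// Toy stand-in for x86lib's MemorySystem; the real class routes to
// MemoryDevice objects, which deliberately stay reason-agnostic.
class MemorySystem {
    std::map<uint32_t, uint8_t> mem;
public:
    uint64_t code_fetches = 0, data_accesses = 0; // counters a gas model could bill
    void Write(uint32_t addr, uint8_t v) { mem[addr] = v; }
    uint8_t Read(uint32_t addr, MemAccessReason reason) {
        if (reason == MemAccessReason::CodeFetch) code_fetches++;
        else if (reason == MemAccessReason::Data) data_accesses++;
        // Internal reads are deliberately left unpriced.
        return mem[addr];
    }
};
```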
The task above prepares for Qtum's new x86 gas model by reserving a separate field for each type of memory access request. For now it is only used to verify feasibility; in the future it will be used to calculate actual gas prices.
QTUMCORE-114: [x86] Add various i386+ instructions
Description: Implement (with unit tests for behavior) the following opcodes and groups:
//op(0x62, op_bound_rW_mW); //186
//op(0x64, op_pre_fs_override); //386
//op(0x65, op_pre_gs_override); //386
// op(0x69, op_imul_rW_rmW_immW); //186 (note: uses /r for rW)
// op(0x6B, op_imul_rW_rmW_imm8); //186 (note: uses /r for rW, imm8 is sign extended)
//op(0x82, op_group_82); //rm8, imm8 - add, or, adc, sbb, and, sub, xor, cmp
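The note on 0x6B matters: the imm8 operand must be sign-extended before the multiply, so a handler that zero-extended it would compute the wrong product for negative immediates. A minimal model, with the handler name taken from the comment above but the body purely illustrative:

```cpp
#include <cassert>
#include <cstdint>

// IMUL rW, rmW, imm8: the 8-bit immediate is sign-extended to operand size
// before the signed multiply; the result truncates to 32 bits.
static uint32_t imul_rW_rmW_imm8(uint32_t rm, uint8_t imm8) {
    int32_t multiplier = (int8_t)imm8; // sign-extend 8 -> 32 bits
    return (uint32_t)((int32_t)rm * multiplier);
}
```

With zero extension, 3 * 0xFF would give 765 instead of the correct -3.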
QTUMCORE-115: [x86] Implement more i386+ opcodes
Description: Implement with unit tests the following opcodes:
(notice opx is extended opcode)
//op(0xC8, op_enter); //186
for(int i=0;i<16;i++)
{
  opx(0x80+i, op_jcc_relW); //386
  opx(0x90+i, op_setcc_rm8); //386
}
opx(0x02, op_lar_rW_rmW);
opx(0x03, op_lsl_rW_rmW);
opx(0x0B, op_unknown); //UD2 official unsupported opcode
opx(0x0D, op_nop_rmW); //nop, but needs a ModRM byte for proper parsing
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA2, op_cpuid); //486
opx(0xA3, op_bt_rmW_rW); //386
opx(0xA4, op_shld_rmW_rW_imm8); //386
opx(0xA5, op_shld_rmW_rW_cl); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAA, op_rsm); //386
opx(0xAB, op_bts_rmW_rW); //386
opx(0xAC, op_shrd_rmW_rW_imm8); //386
opx(0xAD, op_shrd_rmW_rW_cl); //386
Make sure to remove these opcodes from the commented todo list as they are implemented
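Registering all 16 Jcc/SETcc entries in a loop works because the low four bits of the opcode encode the condition code. A toy dispatch table illustrating the pattern (only a few of the 16 conditions are modeled, and only zero/sign flags are passed in; names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>

// Evaluate a condition code against (a subset of) the flags.
static bool test_condition(uint8_t cc, bool zf, bool sf) {
    switch (cc) {
        case 0x4: return zf;    // e/z
        case 0x5: return !zf;   // ne/nz
        case 0x8: return sf;    // s
        case 0x9: return !sf;   // ns
        default:  return false; // remaining conditions omitted in this sketch
    }
}

// Build the 0x0F 0x90+cc SETcc table: each entry returns the byte
// (as bool here) that SETcc would write to its r/m8 operand.
static std::map<uint8_t, std::function<bool(bool, bool)>> build_setcc_table() {
    std::map<uint8_t, std::function<bool(bool, bool)>> table;
    for (int i = 0; i < 16; i++) {
        uint8_t cc = (uint8_t)i;
        table[(uint8_t)(0x90 + cc)] = [cc](bool zf, bool sf) {
            return test_condition(cc, zf, sf);
        };
    }
    return table;
}
```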
QTUMCORE-118: Implement remaining opcodes in x86lib
Description: The remaining opcodes that do not result in an error or change of behavior should be implemented with unit tests. Take particular care and use many references for some of the weird opcodes, like nop_rm32.
The series of tasks above further adds support for i386+ opcodes and implements the rest of the necessary opcodes. At this point x86lib already supports most i386+ instructions.
QTUMCORE-117: Begin leveldb-backed database for x86 contracts
Description: For this story, the code work should be done as a sub-project of Qtum Core, and can be done directly in the Qtum Core github. For now, unit and integration tests should be used to confirm functionality. It will be integrated into Qtum Core later. You might need to modify Qtum Core some so that the project is built with proper dependencies. This story will implement the beginnings of a new database that will be used for smart contracts. This will only store meta-data, contract bytecode, and constructor data for right now:
The leveldb dataset for this data should be named "contracts". The key for this dataset should be a 256-bit contract address (exact format will be specified later) encoded as a hex string.
The value data should contain the following:
• txid of contract creation (with this the chainstate db can be used to lookup blockhash)
• VM version
• contract creation parameters (see "contract deployment" page in design)
• contract creation data (the constructor data)
• contract bytecode
The interface for reading and writing into this database should be clear and extensible. Although it is being designed for the x86 VM, other VMs in the future will also use it.
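A sketch of such an interface, with a std::map standing in for leveldb so the example is self-contained. The record fields follow the value layout listed above, but the class and method names, and the in-memory "serialization", are illustrative, not the final on-disk format:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Value layout from the ticket: creation txid, VM version, creation
// parameters, constructor data, and the contract bytecode itself.
struct ContractRecord {
    std::string creation_txid;   // chainstate db can map this to a blockhash
    uint32_t vm_version = 0;
    std::vector<uint8_t> creation_params;
    std::vector<uint8_t> constructor_data;
    std::vector<uint8_t> bytecode;
};

// "contracts" dataset keyed by the hex-encoded 256-bit contract address.
class ContractStore {
    std::map<std::string, ContractRecord> db; // stand-in for leveldb
public:
    bool Write(const std::string &address_hex, const ContractRecord &rec) {
        db[address_hex] = rec;
        return true;
    }
    bool Read(const std::string &address_hex, ContractRecord &out) const {
        auto it = db.find(address_hex);
        if (it == db.end()) return false;
        out = it->second;
        return true;
    }
};
```

Keeping the interface at this level (records in, records out) is what lets other VMs reuse the store later without caring about the backing database.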
The task above implements the most basic leveldb database for x86 contracts. At present this database only stores specific data such as contract code, and it can be extended in the future. The design also emphasizes a general-purpose interface, so that other virtual machines can call it later.
QTUMCORE-119: Research needed functions in Qtum's version of libc
Description: We should evaluate the C99 standard library specifications to determine which functions should be supported in the x86 VM, with easy-to-use tooling provided to developers (i.e., a custom toolchain). List the headers and functions that are common enough to warrant support, and that are either operating-system agnostic or can in some way fit into the operating-system-like model of Qtum's x86 VM.
Based on the C99 standard library specification, the Qtum x86 virtual machine implements a simplified version of the libc library for use by smart contract developers.
QTUMCORE-126: [x86] [Compiler] Figure out and document a way of compiling/packaging the QtumOS GCC toolchain for Windows, Linux, and OSX
Description: As a contract developer, I don't want to have to compile the QtumOS toolchain myself when developing x86 VM contracts.
For this story, research and document how to build the QtumOS GCC toolchain for Windows, Linux, and OSX. Using the toolchain should be the same experience on every platform, and by following the document anyone should be able to reproduce the pre-built version of GCC.
To give contract developers the same compiler on any common platform, the task above produces a cross-platform, pre-built GCC toolchain.
QTUMCORE-127: [x86] [libqtum] Add basic blockchain data APIs
Description: As a contract developer, I want to be capable of getting basic blockchain data like network weight without needing to know how to write assembly code.
For this story, create a new project for libqtum to compile to libqtum.a using the QtumOS compiler, and place all definitions in a qtum.h file. The first operations to be added are some basic system calls for the following:
• Access to the past 256 block hashes
• Block gas limit
• MPoS staking address for the block (only the 0th address, indicating the block creator)
• Current block difficulty
• Previous block time
• Current block height
These functions are not yet built into the x86 VM or Qtum, so these will just be mocks for now that can't be tested until later.
API list:
• previousBlockTime() -> int32 – syscall(0)
• blockGasLimit() -> int64 – syscall(1, &return);
• blockCreator() -> address_t – syscall(2, &return);
• blockDifficulty() -> int32 – syscall(3);
• blockHeight() -> int32 – syscall(4);
• getBlockHash(int number) -> hash_t (32 bytes) – syscall(5, number, &return);
Note, this inline assembly code can be used as a template for safely using the "int" opcode from C code, and should be capable of being put into a .S assembly file and used via:
//in C header
extern long syscall(long syscall_number, long p1, long p2, long p3, long p4, long p5, long p6);
//in .S file
# user mode
# long syscall(long number, long p1, long p2, long p3, long p4, long p5, long p6)
.global syscall
syscall:
push %ebp
mov %esp, %ebp
push %edi
push %esi
push %ebx
mov 8+0*4(%ebp), %eax
mov 8+1*4(%ebp), %ebx
mov 8+2*4(%ebp), %ecx
mov 8+3*4(%ebp), %edx
mov 8+4*4(%ebp), %esi
mov 8+5*4(%ebp), %edi
mov 8+6*4(%ebp), %ebp
int $0x40
pop %ebx
pop %esi
pop %edi
pop %ebp
ret
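Until the VM-side handlers exist, the API list above can be exercised against a host-side mock dispatcher like the one below. The syscall numbers follow the list; the signature is simplified (one input, one output pointer), and every returned value is fake test data, not anything Qtum actually returns:

```cpp
#include <cassert>
#include <cstdint>

// Host-side mock of the int $0x40 syscall table; numbers per the API list.
static long mock_syscall(long number, long p1, long *out) {
    switch (number) {
        case 0: return 1540000000;           // previousBlockTime()
        case 1: *out = 8000000;  return 0;   // blockGasLimit(&return)
        case 3: return 0x1d00ffff;           // blockDifficulty()
        case 4: return 250000;               // blockHeight()
        case 5: *out = p1 * 1111; return 0;  // getBlockHash(number, &return), fake hash
        default: return -1;                  // unknown syscall
    }
}

// The qtum.h wrappers then reduce to thin calls around the dispatcher.
static int32_t previousBlockTime() { return (int32_t)mock_syscall(0, 0, nullptr); }
static int64_t blockGasLimit() { long r = 0; mock_syscall(1, 0, &r); return r; }
```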
Basic blockchain data is very useful when writing smart contracts, but without further tooling it is difficult for ordinary smart contract developers to obtain. The task above provides an API for fetching basic block data, enabling developers to quickly obtain the relevant block data and improving the efficiency of smart contract development.
QTUMCORE-128: [x86] [VM] Add very basic gas system
Description: As a contract developer, I want to test how intensive my prototype x86 smart contracts will be on a real blockchain.
For this story, add a very basic gas model to the x86 VM. There should be a new option added to Execute() that allows for specifying an absolute gas limit that execution will error upon hitting. It should also be possible to retrieve how much Gas was used during the execution of the program. For this basic gas model, each instruction is 1 gas. It is ok if there are edge cases where an instruction might not be counted.
The task above implements the most basic gas system for the x86 virtual machine, which can be used to measure the gas a contract actually consumes on a real blockchain.
QTUMCORE-129: [x86] [DeltaDB] Add very basic prototype version of DeltaDB
Description: As a contract developer, I want my prototype x86 contracts to persist within my own personal blockchain so that I can do more than just execute them. I need to be able to call them after deployment.
Right now, we will only concern ourselves with loading and writing contract bytecode. The key thus should be "bytecode_%address%" and the value should be the raw contract bytecode. The contract bytecode will have an internal format later so that bytecode, constant data, and contract options are distinguishable in the flat data.
The exposed C++ class interface should simply allow for the smart contract VM layer to look up the size of an address's code, load the address's code into memory, and write code from memory into an address's associated data store.
Look at how the leveldb code works for things like "txindex" in Qtum and model this using the Bitcoin database helper if possible. There is no need for this to be tied to consensus right now. It is also fine to ignore block disconnects and things that would cause the state to be reverted in the database.
Please do all work based on the time/qtumcore0.15 branch in Qtum Core for right now. Also, for the format of an "address", please look for "UniversalAddress" in the earlz/x86-2 branch, and copy the related code if needed.
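A sketch of the "bytecode_%address%" keying and the three operations the story asks for (size lookup, load, store), again with a std::map standing in for the leveldb wrapper; the class and method names are illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Toy DeltaDB: contract bytecode keyed by "bytecode_" + address.
class DeltaDB {
    std::map<std::string, std::vector<uint8_t>> db; // stand-in for leveldb
    static std::string key(const std::string &address) {
        return "bytecode_" + address;
    }
public:
    void writeByteCode(const std::string &address,
                       const std::vector<uint8_t> &code) {
        db[key(address)] = code;
    }
    size_t byteCodeSize(const std::string &address) const {
        auto it = db.find(key(address));
        return it == db.end() ? 0 : it->second.size();
    }
    std::vector<uint8_t> readByteCode(const std::string &address) const {
        auto it = db.find(key(address));
        return it == db.end() ? std::vector<uint8_t>{} : it->second;
    }
};
```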
The task above adds the most basic database, DeltaDB, to the x86 virtual machine. It can be used to store contract state and is a prerequisite for contract invocation.
QTUMCORE-130: [x86] [UI] Add "createx86contract" RPC call
Description: As a smart contract developer, I want to be capable of easily deploying my contract code without too much worry.
In this story, add a new RPC call named "createx86contract" which accepts 4 arguments: gas price, gas limit, filename (ELF file), and sender address.
The ELF file should be torn apart into a flat array of data representing the contract data to be put onto the blockchain:
  1. int32 size of options
  2. int32 size of code
  3. int32 size of data
  4. int32 (unused)
  5. options data (right now, this can be empty, and size is 0)
  6. code memory data
  7. data memory data
Similar ELF file processing exists in the x86Lib project and that code can be adapted for this. Note that there should be some way of returning errors and warnings back to the user in case of problems with the ELF file.
After the contract data is extracted and built, a transaction output of the following type should be constructed (similar to create contract)
OP_CREATE
The RPC result should be similar to what is returned from createcontract, except that a warnings/errors field should be included, the contract address should be a base58 x86 address, and any other fields invalid for x86 should be excluded.
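The flat layout enumerated above can be sketched as a simple packer: four int32 size fields followed by the options, code, and data sections. The actual ELF section extraction is out of scope here, so the inputs are arbitrary byte vectors, and little-endian encoding is an assumption on my part:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Pack contract sections into the flat array: sizes of options/code/data,
// one reserved int32, then the three sections back to back.
static std::vector<uint8_t> packContractData(const std::vector<uint8_t> &options,
                                             const std::vector<uint8_t> &code,
                                             const std::vector<uint8_t> &data) {
    std::vector<uint8_t> out;
    auto put32 = [&out](uint32_t v) { // little-endian int32 (assumed)
        for (int i = 0; i < 4; i++) out.push_back((uint8_t)(v >> (8 * i)));
    };
    put32((uint32_t)options.size());
    put32((uint32_t)code.size());
    put32((uint32_t)data.size());
    put32(0); // reserved / unused
    out.insert(out.end(), options.begin(), options.end());
    out.insert(out.end(), code.begin(), code.end());
    out.insert(out.end(), data.begin(), data.end());
    return out;
}
```

The resulting blob is what would be placed behind OP_CREATE in the transaction output.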
The task above adds a new Qtum RPC call, createx86contract. This RPC can load a smart contract directly from an ELF executable and deploy it to the blockchain, and it adds an error-return mechanism so that developers can know the status of contract deployment.
Summary
This chapter contains the most critical implementation details of the Qtum x86 virtual machine. Through these tasks, the Qtum team has implemented the first virtual machine that can deploy and run x86 smart contracts, along with tooling for writing smart contracts in C; the prototype now supports contracts written in C. Implementation of the Qtum x86 virtual machine is ongoing, and the Qtum team will continue to publish its design documents; interested readers can keep following the project.
submitted by thisthingismud to Qtum [link] [comments]

InterValue Project Weekly: 20181112-20181118

InterValue Project Weekly: 20181112-20181118
https://preview.redd.it/igyyo1hce9z11.jpg?width=900&format=pjpg&auto=webp&s=52e74c9a735118058a61d3a77f462704e4671ba9

Part 1. Development Progress of InterValue.


1. Project Progress
(1) Full-node development: we completed the cross-chain related interface’s definition, development, and initial joint debugging.
(2) Local full-node development: we completed the new design schemes to achieve anti-split attacks; we completed the initial development of the transaction data part of the MySQL database upgrade; we completed cross-chain related interface’s definition, development, and initial joint debugging.
(3) Main chain test and code refactoring: we tested the current TPS of the shard, analyzed the performance of each stage. Then, we located the bottleneck point and completed the reconstruction of the HTTP interface’s framework code.
(4) Light node 3.2.0-chat function: we completed the test, acceptance, release, and the test report.
(5) Light Node 3.2.1-optimization: we completed the development plan, and the optimization task is 30% complete.
(6) Light node 3.3.0-multi-chain wallet: we completed the prototype review and the transformation of the bitcoin wallet RPC call mode. We completed the INVE wallet and the joint debugging of main chain’s new transaction interface.
(7) Smart Contract: we completed contract compilation, deployment, call execution, and updated world status for numeric type calculations. Conditional transfer, development, and testing of the current contract are in progress.
(8) Cross-chain: we completed the initial joint interface with the main chain node.

2. Whitepaper
(1) We modified some regular details.


Part 2. Strategic Cooperation.


1. InterValue (INVE) About to Get Listed on bitget
InterValue (INVE, ERC20 Token) is expected to get listed on bitget on November 15 and trading will start on November 16. Two trading pairs INVE/ETH and INVE/USDT will be available. Bitget is the third exchange on which InterValue gets listed after FCoin and Hotbit. InterValue plans to increase its listing count.
https://preview.redd.it/6w2wnpuzd9z11.png?width=1000&format=png&auto=webp&s=f072c9cfe9a39ece1a32179e6d54eb436cbf501c

Part 3. Media Activities.


1. Bounty Program: The Suggestion on Ecological Construction Policy

The community-wide activity collecting suggestions on the Ecological Construction Policy officially closed on November 14th, while the event collecting opinions from overseas communities officially opened on November 13th.
https://preview.redd.it/m6yhq1k1e9z11.png?width=1000&format=png&auto=webp&s=5e125db75b745d4759d4dfc23af13cfbb5dce6fd
2. InterValue Participates in the "EXPLORING FUTURE" Meetup

The "EXPLORING FUTURE" event, hosted by Benrui Capital and Starwin Capital, was held at the Roosevelt Mansion on the Shanghai Bund on November 15, 2018. More than 30 investment institutions participated in the exchange. InterValue's Operations Manager Wen gave a speech at the opening ceremony and won warm applause.
https://preview.redd.it/1ii5yrf2e9z11.jpg?width=1080&format=pjpg&auto=webp&s=2dc86389b75345968592d2fcb42812690cab1797

https://preview.redd.it/29edoq93e9z11.jpg?width=900&format=pjpg&auto=webp&s=458d3fe3b34ab7d6d2883ec242f45649bc185c67

https://preview.redd.it/fauqpj64e9z11.jpg?width=1080&format=pjpg&auto=webp&s=2654c88ef408f2065a0eb0e85eb6c7da831994b0
3. Barton Chao, the Founder of InterValue, Gave a Speech at Beijing's First Blockchain Restaurant

Barton Chao, the founder of InterValue, gave a speech at Beijing's first blockchain restaurant on November 17, 2018, on the theme "Practical Blockchain Infrastructure Industry". In the lecture he focused on what functional features a practical blockchain should offer, how to judge which scenarios suit blockchain technology, the implementation of supply chain finance scenarios, the practice of cross-border trade scenarios, and detailed explanations of topics such as transaction flow practice.
https://preview.redd.it/a8j5q765e9z11.jpg?width=1267&format=pjpg&auto=webp&s=87b1836baa51e6355909f30261128d26fc4ede2b
Part 4. The Historical Process of InterValue.

(1) In August 2017, Barton Chao envisioned the positioning, vision, and functionality of InterValue.
(2) From September to October in 2017, InterValue created a team and refined the functions.
(3) In November 2017, InterValue created the R&D team and began to write a white paper and a development plan.
(4) In January 2018, InterValue completed the first draft of the Chinese and English white papers.
(5) From February to March in 2018, InterValue completed Chinese and English white papers from InterValue 1.0 to 4.0.
(6) In April 2018, the InterValue v1.0 testnet was completed and put into practice.
(7) In May 2018, the development of the InterValue v2.0 testnet was launched.
(8) In June 2018, the InterValue v1.0 testnet wallet Demo was launched.
(9) In June 2018, the InterValue v2.0 HashNet consensus mechanism was verified.
(10) In June 2018, the InterValue v2.0 exceeded a million TPS.
(11) In June 2018, the InterValue 2.0 testnet launched the world’s first local full-node, light-node alpha test.
(12) In July 2018, InterValue launched its first bounty program.
(13) In August 2018, the InterValue 2.0 testnet was awarded the TPS performance testing report and testing certificate issued by China Telecommunication Technology Labs.
(14) In August 2018, InterValue’s Chinese and English white papers are updated from 4.0 to 4.5.
(15) In August 2018, InterValue officially released the INVE ERC20 Token.
(16) In August 2018, InterValue team officially established the Xiangjiang Blockchain Research Institute.
(17) In August 2018, InterValue officially established Blockchain Security Division.
(18) In September 2018, InterValue officially got its INVE token listed on FCoin.
(19) In September 2018, InterValue opened the bounty conversion.
submitted by intervalue to InterValue [link] [comments]

InterValue Project Weekly: 20181112-20181118

InterValue Project Weekly: 20181112-20181118

https://preview.redd.it/jd7y84ese9z11.jpg?width=900&format=pjpg&auto=webp&s=74d5c90cba4b92d393ee53ab442a95754d6283ed

Part 1. Development Progress of InterValue.

1. Project Progress
(1) Full-node development: we completed the definition, development, and initial joint debugging of the cross-chain interfaces.
(2) Local full-node development: we completed the new design scheme for resisting split attacks, the initial development of the MySQL database upgrade for transaction data, and the definition, development, and initial joint debugging of the cross-chain interfaces.
(3) Main chain test and code refactoring: we tested the current TPS of the shard, analyzed the performance of each stage, located the bottleneck, and completed the refactoring of the HTTP interface framework code.
(4) Light node 3.2.0-chat function: we completed the test, acceptance, release, and the test report.
(5) Light node 3.2.1-optimization: we completed the development plan, and the optimization work is 30% complete.
(6) Light node 3.3.0-multi-chain wallet: we completed the prototype review and the transformation of the bitcoin wallet RPC call mode. We completed the INVE wallet and the joint debugging of main chain’s new transaction interface.
(7) Smart Contract: we completed contract compilation, deployment, call execution, and world-state updates for numeric-type calculations. Development and testing of conditional transfers in the current contract are in progress.
(8) Cross-chain: we completed the initial joint interface with the main chain node.
2. Whitepaper
(1) We modified some regular details.

Part 2. Strategic Cooperation.

1. InterValue (INVE) About to Get Listed on Bitget
InterValue (INVE, an ERC20 token) is expected to be listed on Bitget on November 15, with trading starting on November 16. Two trading pairs, INVE/ETH and INVE/USDT, will be available. Bitget is the third exchange to list InterValue, after FCoin and Hotbit, and InterValue plans to list on more exchanges.

https://preview.redd.it/xxd48chue9z11.png?width=1000&format=png&auto=webp&s=7775362f7ca1ad7305b8eebc5750907877f6712e

Part 3. Media Activities.

1. Bounty Program: The Suggestion on Ecological Construction Policy
The community-wide activity collecting suggestions on the Ecological Construction Policy officially closed on November 14th. The collection of opinions from overseas communities officially opened on November 13th.

https://preview.redd.it/r3hmrngve9z11.png?width=1000&format=png&auto=webp&s=ced2320cfce87b562191e0c25539e15beb828e17
2. InterValue Participates in the "EXPLORING FUTURE" Meetup
The "EXPLORING FUTURE" event, hosted by Benrui Capital and Starwin Capital, was held at the Roosevelt Mansion on the Shanghai Bund on November 15, 2018. More than 30 investment institutions participated in the exchange. InterValue’s Operations Manager Wen gave a speech at the opening ceremony and won warm applause.

https://preview.redd.it/wx9u4nhwe9z11.jpg?width=1080&format=pjpg&auto=webp&s=26d963c1a04555ddf4d6a916959681b65ad51fd5

https://preview.redd.it/mrqk4raxe9z11.jpg?width=900&format=pjpg&auto=webp&s=cdc0dab0888f5cdd04af4339c8219ec0dda5ab27

https://preview.redd.it/v8zour2ye9z11.jpg?width=1080&format=pjpg&auto=webp&s=383e4ee9e31ac58b424131e15bbabba750547ee9


3. Barton Chao, the founder of InterValue, gave a speech at Beijing's first blockchain restaurant
Barton Chao, the founder of InterValue, gave a speech at Beijing's first blockchain restaurant on November 17, 2018, on the theme “Practical Blockchain Infrastructure Industry”. In the lecture, Barton Chao focused on what functional features a practical blockchain should offer, how to judge which scenarios are suitable for blockchain technology, the implementation of supply-chain finance scenarios, the practice of cross-border trade scenarios, and detailed explanations of aspects such as transaction-flow practice.

https://preview.redd.it/h5eakdxye9z11.jpg?width=1267&format=pjpg&auto=webp&s=c338e102b2cc2a7da835994239a889754cc18e06
Part 4. The Historical Process of InterValue.
(1) In August 2017, Barton Chao envisioned the positioning, vision, and functionality of InterValue.
(2) From September to October in 2017, InterValue created a team and refined the functions.
(3) In November 2017, InterValue created the R&D team and began to write a white paper and a development plan.
(4) In January 2018, InterValue completed the first draft of the Chinese and English white papers.
(5) From February to March in 2018, InterValue completed Chinese and English white papers from InterValue 1.0 to 4.0.
(6) In April 2018, the InterValue v1.0 testnet was completed and put into practice.
(7) In May 2018, the development of the InterValue v2.0 testnet was launched.
(8) In June 2018, the InterValue v1.0 testnet wallet Demo was launched.
(9) In June 2018, the InterValue v2.0 HashNet consensus mechanism was verified.
(10) In June 2018, the InterValue v2.0 exceeded a million TPS.
(11) In June 2018, the InterValue 2.0 testnet launched the world’s first local full-node, light-node alpha test.
(12) In July 2018, InterValue launched its first bounty program.
(13) In August 2018, the InterValue 2.0 testnet received a TPS performance testing report and testing certificate issued by China Telecommunication Technology Labs.
(14) In August 2018, InterValue’s Chinese and English white papers were updated from 4.0 to 4.5.
(15) In August 2018, InterValue officially released the INVE ERC20 Token.
(16) In August 2018, InterValue team officially established the Xiangjiang Blockchain Research Institute.
(17) In August 2018, InterValue officially established Blockchain Security Division.
(18) In September 2018, InterValue officially got its INVE token listed on FCoin.
(19) In September 2018, InterValue opened the bounty conversion.
submitted by intervalue to u/intervalue [link] [comments]

IRC Log from Ravencoin Open Developer Meeting - Aug 24, 2018

[14:05] <@wolfsokta> Hello Everybody, sorry we're a bit late getting started
[14:05] == block_338778 [[email protected]/web/freenode/ip.72.214.222.226] has joined #ravencoin-dev
[14:06] <@wolfsokta> Here are the topics we would like to cover today • 2.0.4 Need to upgrade - What we have done to communicate to the community • Unique Assets • iOS Wallet • General Q&A
[14:06] == Chatturga changed the topic of #ravencoin-dev to: 2.0.4 Need to upgrade - What we have done to communicate to the community • Unique Assets • iOS Wallet • General Q&A
[14:06] <@wolfsokta> Daben, could you mention what we have done to communicate the need for the 2.0.4 upgrade?
[14:07] == hwhwhsushwban [[email protected]/web/freenode/ip.172.58.37.35] has joined #ravencoin-dev
[14:07] <@wolfsokta> Others here are free to chime in where they saw the message first.
[14:07] == hwhwhsushwban [[email protected]/web/freenode/ip.172.58.37.35] has quit [Client Quit]
[14:08] Whats up bois
[14:08] hi everyone
[14:08] hi hi
[14:08] <@wolfsokta> Discussing the 2.0.4 update and the need to upgrade.
[14:08] <@Chatturga> Sure. As most of you are aware, the community has been expressing concerns with the difficulty oscillations, and were asking that something be done to the difficulty retargeting. Many people submitted suggestions, and the devs decided to implement DGW.
[14:09] <@Tron> I wrote up a short description of why we're moving to a new difficulty adjustment. https://medium.com/@tronblack/ravencoin-dark-gravity-wave-1da0a71657f7
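[Editor's note: the Dark Gravity Wave retarget Tron's post describes can be sketched as a rolling average — average the difficulty of the last N blocks and scale by how far the actual solve time drifted from the target spacing. A minimal sketch with illustrative window and spacing parameters, not Ravencoin's exact consensus code:]

```python
TARGET_SPACING = 60  # seconds per block (illustrative)
WINDOW = 24          # blocks averaged per retarget (illustrative)

def dgw_next_difficulty(difficulties, timestamps):
    """difficulties/timestamps: newest-last lists covering at least WINDOW+1 blocks."""
    avg_diff = sum(difficulties[-WINDOW:]) / WINDOW
    actual_span = timestamps[-1] - timestamps[-(WINDOW + 1)]
    target_span = WINDOW * TARGET_SPACING
    # Clamp the adjustment so one burst of rented hash power cannot
    # swing difficulty arbitrarily in a single retarget step.
    actual_span = max(target_span // 3, min(actual_span, target_span * 3))
    return avg_diff * target_span / actual_span
```

Because every block retargets against a short rolling window, difficulty tracks hash power far faster than a 2016-block Bitcoin-style retarget — which is the whole point of the fork being discussed here.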
[14:09] <@Chatturga> I have made posts on discord, telegram, bitcointalk, reddit, and ravencointalk.org from testnet stages through current.
[14:10] <@Chatturga> If there are any other channels that can reach a large number of community members, I would love to have more.
[14:10] <@wolfsokta> Thanks Tron, that hasn't been shared to the community at large yet, but folks feel free to share it.
[14:10] When was this decision made and by whom and how?
[14:10] <@Chatturga> I have also communicated with the pool operators and exchanges about the update. Of all of the current pools, only 2 have not yet updated versions.
[14:11] <@wolfsokta> The decision was made by the developers through ongoing requests for weeks made by the community.
[14:12] <@wolfsokta> Evidence was provided by the community of the damages that could be caused to projects when the wild swings continue.
[14:12] So was there a meeting or vote? How can people get invited
[14:12] <@Tron> It was also informed by my conversations with some miners that recommended that we make the change before the coin died. They witnessed similar oscillations from which other coins never recovered.
[14:13] only two pools left to upgrade is good, what about the exchanges? Any word on how many of those have/have not upgraded?
[14:13] <@wolfsokta> We talked about here in our last meeting Bruce_. All attendees were asked if they had any questions or concerns.
[14:13] == blondfrogs [[email protected]/web/freenode/ip.185.245.87.219] has joined #ravencoin-dev
[14:13] == roshii [[email protected]/web/freenode/ip.41.251.25.100] has joined #ravencoin-dev
[14:13] sup roshii long time no see
[14:14] <@Chatturga> Bittrex, Cryptopia, and IDCM have all either updated or have announced their intent to update.
[14:14] == wjcgiwgu283ik3cj [[email protected]/web/freenode/ip.172.58.37.35] has joined #ravencoin-dev
[14:15] sup russki
[14:15] what's the status here?
[14:15] I don’t think that was at all clear from the last dev meeting
[14:15] I can’t be the only person who didn’t understand it
[14:15] <@wolfsokta> Are there any suggestions on how to communicate the need to upgrade even further? I am concerned that others might also not understand.
[14:17] I’m not sold on the benefit and don’t understand the need for a hard fork — I think it’s a bad precedent to simply go rally exchanges to support a hard fork with little to no discussion
[14:17] so just to note, the exchanges not listed as being upgraded or have announced their intention to upgrade include: qbtc, upbit, and cryptobridge (all with over $40k usd volume past 24 hours according to coinmarketcap)
[14:18] <@wolfsokta> I don't agree that there was little or no discussion at all.
[14:19] <@wolfsokta> Looking back at our meeting notes from two weeks ago "fork" was specifically asked about by BrianMCT.
[14:19] If individual devs have the power to simple decide to do something as drastic as a hard fork and can get exchanges and miners to do it that’s got a lot of issues with centralization
[14:19] <@wolfsokta> It had been implemented on testnet by then and discussed in the community for several weeks before that.
[14:19] == under [[email protected]/web/freenode/ip.72.200.168.56] has joined #ravencoin-dev
[14:19] howdy
[14:19] Everything I’ve seen has been related to the asset layer
[14:19] I have to agree with Bruce_, though I wasn't able to join the last meeting here. That said I support the fork
[14:20] Which devs made this decision to do a fork and how was it communicated?
[14:20] well mostly the community made the decision
[14:20] Consensus on a change is the heart of bitcoin development and I believe the devs have done a great job building that consensus
[14:20] a lot of miners were in uproar about the situation
[14:20] <@wolfsokta> All of the devs were supporting the changes. It wasn't done in isolation at all.
[14:21] This topic has been a huge discussion point within the RVN mining community for quite some time
[14:21] the community and miners have been having issues with the way diff is adjusted for quite some time now
[14:21] Sure I’m well aware of that -
[14:21] Not sold on the benefits of having difficulty crippled by rented hashpower?
[14:21] The community saw a problem. The devs got together and talked about a solution and implemented a solution
[14:21] I’m active in the community
[14:22] So well aware of the discussions on DGW etc
[14:22] Hard fork as a solution to a problem community had with rented hashpower (nicehash!!) sounds like the perfect decentralized scenario!
[14:23] hard forks are very dangerous
[14:23] mining parties in difficulty drops are too
[14:23] <@wolfsokta> Agreed, we want to keep them to an absolute minimum.
[14:23] But miners motivation it’s the main vote
[14:24] What would it take to convince you that constantly going from 4 Th/s to 500 Gh/s every week is worse for the long term health of the coin than the risk of a hard fork to fix it?
[14:24] == Tron [[email protected]/web/freenode/ip.173.241.144.77] has quit [Ping timeout: 252 seconds]
[14:24] This hardfork does include the asset layer right? if so why is it being delayed in implementation?
[14:24] <@wolfsokta> Come back Tron!
[14:24] coudl it have been implement through bip9 voting?
[14:24] also hard fork is activated by the community! that's a vote thing!
[14:24] @mrsushi to give people time to upgrade their wallet
[14:25] @under, it would be much hard to keep consensus with a bip9 change
[14:25] <@wolfsokta> We investigated that closely Under.
[14:25] == Tron [[email protected]/web/freenode/ip.173.241.144.77] has joined #ravencoin-dev
[14:25] <@wolfsokta> See Tron's post for more details about that.
[14:25] <@spyder_> Hi Tron
[14:25] <@wolfsokta> https://medium.com/@tronblack/ravencoin-dark-gravity-wave-1da0a71657f7
[14:25] Sorry about that. Computer went to sleep.
[14:26] I'm wrong
[14:26] 2 cents. the release deadline of october 31st puts a bit of strain on getting code shipped. (duh). but fixing daa was important to the current health of the coin, and was widely suppported by current mining majority commuity. could it have been implemented in a different manner? yes . if we didnt have deadlines
[14:27] == wjcgiwgu283ik3cj [[email protected]/web/freenode/ip.172.58.37.35] has quit [Quit: Page closed]
[14:27] sushi this fork does not include assets. it's not being delayed though, we're making great progress for an Oct 31 target
[14:28] I don’t see the urgency but my vote doesn’t matter since my hash power is still CPUs
[14:28] <@wolfsokta> We're seeing the community get behind the change as well based on the amount of people jumping back in to mine through this last high difficulty phase.
[14:28] So that will be another hardfork?
[14:28] the fork does include the asset code though set to activate on oct 30th
[14:28] yes
[14:29] <@wolfsokta> Yes, it will based on the upgrade voting through the BIP9 process.
[14:29] I wanted to ask about burn rates from this group: and make a proposal.
[14:29] we're also trying hard to make it the last for awhile
[14:29] Can you clear up the above — there will be this one and another hard fork?
[14:29] <@wolfsokta> Okay, we could discuss that under towards the end of the meeting.
[14:30] If this one has the asset layer is there something different set for October
[14:30] <@wolfsokta> Yes, there will be another hard fork on October 31st once the voting process is successful.
[14:31] <@wolfsokta> The code is in 2.0.4 now and assets are active on testnet
[14:31] Bruce, the assets layer is still being worked on. Assets is active on mainnet. So in Oct 31 voting will start. and if it passes, the chain will fork.
[14:31] this one does NOT include assets for mainnet Bruce -- assets are targeted for Oct 31
[14:31] not***
[14:31] not active****
[14:31] correct me if I'm wrong here, but if everyone upgrades to 2.0.4 for this fork this week, the vote will automatically pass on oct 31st correct? nothing else needs to be done
[14:31] Will if need another download or does this software download cover both forks?
[14:31] <@wolfsokta> Correct Urgo
[14:32] thats how the testnet got activated and this one shows "asset activation status: waiting until 10/30/2018 20:00 (ET)"
[14:32] Will require another upgrade before Oct 31
[14:32] thank you for the clarification wolfsokta
[14:32] <@wolfsokta> It covers both forks, but we might have additional bug fixes in later releases.
[14:32] So users DL one version now and another one around October 30 which activates after that basically?
[14:33] I understand that, but I just wanted to make it clear that if people upgrade to this version for this fork and then don't do anything, they are also voting for the fork on oct 31st
[14:33] Oh okay — one DL?
[14:33] Bruce, Yes.
[14:33] Ty
[14:33] well there is the issue that there maybe some further consensus bugs dealing with the pruneability of asset transactions that needs to be corrected between 2.0.4 and mainnet. so i would imagine that there will be further revisions required to upgrade before now and october 31
[14:33] @under that is correct.
[14:34] I would highly recommend bumping the semver up to 3.0.0 for the final pre 31st release so that the public know to definitely upgrade
[14:34] @under +1
[14:35] out of curiosity, have there been many bugs found with the assets from the version released in july for testnet (2.0.3) until this version? or is it solely a change to DGW?
[14:35] <@wolfsokta> That's not a bad idea under.
[14:35] <@spyder_> @under good idea
[14:35] @urgo. Bugs are being found and fixed daily.
[14:35] Any time the protocol needs to change, there would need to be a hard fork (aka upgrade). It is our hope that we can activate feature forks through the BIP process (as we are doing for assets). Mining pools and exchanges will need to be on the newest software at the point of asset activation - should the mining hash power vote for assets.
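[Editor's note: the BIP9-style activation Tron describes works by miners setting a version bit; once enough blocks in a signalling period carry the bit, the feature locks in. A toy sketch — the window and 95% threshold are Bitcoin's defaults and the bit number is hypothetical, not Ravencoin's actual deployment parameters:]

```python
WINDOW = 2016     # blocks per signalling period (Bitcoin's default, assumed)
THRESHOLD = 1916  # ~95% of the window must signal (assumed)
ASSET_BIT = 6     # hypothetical version bit for the asset deployment

def period_locks_in(block_versions):
    """block_versions: the nVersion fields of one full signalling period."""
    signalling = sum(1 for v in block_versions if (v >> ASSET_BIT) & 1)
    return signalling >= THRESHOLD
```

This is why simply running the new release "votes" for the fork: upgraded nodes mine blocks with the bit set, and no further action is needed from them.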
[14:35] blondfrogs: gotcha
[14:35] There have been bugs found (and fixed). Testing continues. We appreciate all the bug reports you can give us.
[14:36] <@wolfsokta> Yes! Thank you all for your help in the community.
[14:37] (pull requests with fixes and test coverage would be even better!)
[14:37] asset creation collision is another major issue. current unfair advantage or nodes that fore connect to mining pools will have network topologies that guarantee acceptance. I had discussed the possibility of fee based asset creation selection and i feel that would be a more equal playing ground for all users
[14:38] *of nodes that force
[14:38] <@wolfsokta> What cfox said, we will always welcome development help.
[14:38] So just to make sure everyone know. When assets is ready to go live on oct 31st. Everyone that wants to be on the assets chain without any problems will have to download the new binary.
[14:39] <@wolfsokta> The latest binary.
[14:39] under: already in the works
[14:39] excellent to hear
[14:39] == UserJonPizza [[email protected]/web/freenode/ip.24.218.60.237] has joined #ravencoin-dev
[14:39] <@wolfsokta> Okay, we've spent a bunch of time on that topic and I think it was needed. Does anybody have any other suggestions on how to get the word out even more?
[14:40] maybe preface all 2.0.X releases as pre-releases... minimize the number of releases between now and 3.0 etc
[14:41] <@wolfsokta> Bruce_ let's discuss further offline.
[14:41] wolfsokta: which are the remaining two pools that need to be upgraded? I've identified qbtc, upbit, and cryptobridge as high volume exchanges that haven't said they were going to do it yet
[14:41] so people can help reach out to them
[14:41] f2pool is notoriously hard to contact
[14:41] are they on board?
[14:42] <@wolfsokta> We could use help reaching out to QBTC and Graviex
[14:42] I can try to contact CB if you want?
[14:42] <@Chatturga> The remaining pools are Ravenminer and PickAxePro.
[14:42] <@Chatturga> I have spoken with their operators, the update just hasnt been applied yet.
[14:42] ravenminer is one of the largest ones too. If they don't upgrade that will be a problem
[14:42] okay good news
[14:42] (PickAxePro sounds like a Ruby book)
[14:43] I strongly feel like getting the word out on ravencoin.org would be beneficial
[14:44] that site is sorely in need of active contribution
[14:44] Anyone can volunteer to contribute
[14:44] <@wolfsokta> Okay, cfox can you talk about the status of unique assets?
[14:44] sure
[14:45] <@wolfsokta> I'll add website to the end of our topics.
[14:45] code is in review and will be on the development branch shortly
[14:45] would it make sense to have a page on the wiki (or somewhere else) that lists the wallet versions run by pools & exchanges?
[14:45] will be in next release
[14:45] furthermore, many sites have friendly link to the standard installers for each platform, if the site linked to the primary installers for each platform to reduce github newb confusion that would be good as well
[14:46] likely to a testnetv5 although that isn't settled
[14:46] <@wolfsokta> Thanks cfox.
[14:46] <@wolfsokta> Are there any questions about unique assets, and how they work?
[14:47] after the # are there any charachters you cant use?
[14:47] will unique assets be constrained by the asset alphanumeric set?
[14:47] ^
[14:47] <@Chatturga> @Urgo there is a page that tracks and shows if they have updated, but it currently doesnt show the actual version that they are on.
[14:47] a-z A-Z 0-9
[14:47] <@Chatturga> https://raven.wiki/wiki/Exchange_notifications#Pools
[14:47] There are a few. Mostly ones that mess with command-line
[14:47] you'll be able to use rpc to do "issueunique MATRIX ['Neo','Tank','Tank Brother']" and it will create three assets for you (MATRIX#Neo, etc.)
[14:47] @cfox - No space
[14:48] @under the unique tags have an expanded set of characters allowed
[14:48] Chatturga: thank you
[14:48] @UJP yes there are some you can't use -- I'll try to post gimmie a sec..
[14:49] Ok. Thank you much!
[14:49] 36^36 assets possible and 62^62 uniques available per asset?
[14:49] <@spyder_> std::regex UNIQUE_TAG_CHARACTERS("^[[email protected]$%&*()[\\]{}<>_.;?\\\\:]+$");
[14:50] regex UNIQUE_TAG_CHARACTERS("^[[email protected]$%&*()[\\]{}<>_.;?\\\\:]+$")
[14:50] oh thanks Mark
[14:51] <@wolfsokta> Okay, next up. I want to thank everybody for helping test the iOS wallet release.
[14:51] <@wolfsokta> We are working with Apple to get the final approval to post it to the App Store
[14:51] @under max asset length is 30, including unique tag
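[Editor's note: pulling together the constraints mentioned in the chat — a root asset name, a `#` separator, an expanded character set for the unique tag with no spaces, and 30 characters total — a rough client-side sketch of what `issueunique` produces might look like this. The character classes are paraphrased from the discussion; the authoritative definitions live in the Ravencoin source (e.g. `UNIQUE_TAG_CHARACTERS`):]

```python
import re

BASE_RE = re.compile(r"^[A-Z0-9._]{3,}$")  # root asset names (assumed)
TAG_RE = re.compile(r"^[-A-Za-z0-9@$%&*()\[\]{}<>_.;?\\:]+$")  # expanded tag set (assumed)
MAX_NAME_LEN = 30  # "max asset length is 30, including unique tag"

def make_unique_names(root, tags):
    """Mimic `issueunique ROOT [tags...]`: return the ROOT#tag names it would create."""
    if not BASE_RE.match(root):
        raise ValueError(f"invalid root asset name: {root}")
    names = []
    for tag in tags:
        name = f"{root}#{tag}"
        if not TAG_RE.match(tag) or len(name) > MAX_NAME_LEN:
            raise ValueError(f"invalid unique asset name: {name}")
        names.append(name)
    return names
```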
[14:51] Does the RVN wallet have any other cryptos or just RVN?
[14:52] == BruceFenton [[email protected]/web/freenode/ip.67.189.233.170] has joined #ravencoin-dev
[14:52] will the android and ios source be migrated to the ravenproject github?
[14:52] I've been adding beta test users. I've added about 80 new users in the last few days.
[14:52] <@wolfsokta> Just RVN, and we want to focus on adding the asset support to the wallet.
[14:53] == Bruce_ [[email protected]/web/freenode/ip.67.189.233.170] has quit [Ping timeout: 252 seconds]
[14:53] <@wolfsokta> Yes, the code will also be freely available on GitHub for both iOS and Android. Thank you Roshii!
[14:53] Would you consider the iOS wallet to be a more secure place for one's holdings than say, a Mac connected to the internet?
[14:53] will there be a chance of a more user freindly wallet with better graphics like the iOS on PC?
[14:53] the android wallet is getting updated for DGW, correct?
[14:53] <@wolfsokta> That has come up in our discussion Pizza.
[14:54] QT framework is pretty well baked in and is cross platform. if we get some qt gurus possibly
[14:54] Phones are pretty good because the wallet we forked uses the TPM from modern phones.
[14:54] Most important is to write down and safely store your 12 word seed.
[14:54] TPM?
[14:54] <@wolfsokta> A user friendly wallet is one of our main goals.
[14:55] TPM == Trusted Platform Module
[14:55] Ahhh thanks
[14:55] just please no electron apps. they are full of security holes
[14:55] <@spyder_> It is whats makes your stuffs secure
[14:55] not fit for crypto
[14:55] under: depends on who makes it
[14:55] The interface screenshots I've seen look like Bread/Loaf wallet ... I assume that's what was forked from
[14:55] ;)
[14:56] <@wolfsokta> @roshii did you see the question about the Android wallet and DGW?
[14:56] Yes, it was a fork of breadwallet. We like their security.
[14:56] chromium 58 is the last bundled electron engine and has every vuln documented online by google. so unless you patch every vuln.... methinks not
[14:56] Agreed, great choice
[14:57] <@wolfsokta> @Under, what was your proposal?
[14:58] All asset creation Transactions have a mandatory OP_CHECKLOCKTIMEVERIFY of 1 year(or some agreed upon time interval), and the 500 RVN goes to a multisig devfund, run by a custodial group. We get: 1) an artificial temporary burn, 2) sustainable community and core development funding for the long term, after OSTK/Medici 3) and the reintroduction of RVN supply at a fixed schedule, enabling the removal of the 42k max cap of total As
[14:58] *im wrong on the 42k figure
[14:58] <@wolfsokta> Interesting...
[14:59] <@wolfsokta> Love to hear others thoughts.
[14:59] Update: I posted a message on the CryptoBridge discord and one of their support members @stepollo#6276 said he believes the coin team is already aware of the fork but he would forward the message about the fork over to them right now anyway
[14:59] Ifs 42 million assets
[14:59] yep.
[15:00] I have a different Idea. If the 500 RVN goes to a dev fund its more centralized. The 500 RVN should go back into the unmined coins so miners can stay for longer.
[15:01] *without a hardfork
[15:01] <@wolfsokta> lol
[15:01] that breaks halving schedule, since utxos cant return to an unmined state.
[15:01] @UJP back into coinbase is interesting. would have to think about how that effects distribution schedule, etc.
[15:01] only way to do that would be to dynamicaly grow max supply
[15:02] and i am concerned already about the max safe integer on various platforms at 21 billion
[15:02] js chokes on ravencoin already
[15:02] <@wolfsokta> Other thoughts on Under's proposal? JS isn't a real language. ;)
[15:02] Well Bitcoin has more than 21 bn Sats
[15:02] Is there somebody who wants to volunteer to fix js.
[15:02] hahaha
[15:03] I honestly would hate for the coins to go to a dev fund. It doesn't seem like Ravencoin to me.
[15:03] Yep, but we're 21 billion x 100,000,000 -- Fits fine in a 64-bit integer, but problematic for some languages.
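[Editor's note: Tron's point checks out directly — the total base-unit supply fits a signed 64-bit integer but exceeds JavaScript's `Number.MAX_SAFE_INTEGER` (2^53 − 1), which is why naive JS handling of raw amounts breaks:]

```python
TOTAL_RVN = 21_000_000_000        # max supply in whole RVN
SATS_PER_RVN = 100_000_000        # base units per coin
total_base_units = TOTAL_RVN * SATS_PER_RVN  # 2.1e18 base units

assert total_base_units < 2**63   # fits a signed 64-bit integer
assert total_base_units > 2**53   # exceeds JS Number.MAX_SAFE_INTEGER
```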
[15:03] <@wolfsokta> Thanks UJP
[15:04] <@wolfsokta> We're past time but I would like to continue if you folks are up for it.
[15:04] Yeah no coins can go anywhere centrality contorted like a dev fund cause that would mean someone has to run it and the code can’t decide that so it’s destined to break
[15:05] currently and long term with out the financial backing of development then improvements and features will be difficult. we are certainly thankful for our current development model. but if a skunkworks project hits a particular baseline of profitability any reasonable company would terminate it
[15:05] Yes let’s contibue for sure
[15:05] the alternative to a dev fund in my mind would be timelocking those funds back to the issuers change address
[15:06] But we can’t have dev built in to the code — it has to be open source like Bitcoin and monero and Litecoin - it’s got drawbacks but way more advantages- it’s the best model
[15:06] Dev funding
[15:06] i highly reccommend not reducing the utility of raven by removing permanently the supply
[15:07] == BW_ [[email protected]/web/freenode/ip.138.68.243.202] has joined #ravencoin-dev
[15:07] timelocking those funds accompllishes the same sacrifice
[15:07] @under timelocking is interesting too
[15:07] How exactly does timelocking work?
[15:07] <@wolfsokta> ^
[15:07] I mean you could change the price of assets with the Block reward halfing.
[15:07] == Roshiix [[email protected]/web/freenode/ip.105.67.2.212] has joined #ravencoin-dev
[15:08] funds cant be spent from an address until a certain time passes
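[Editor's note: the mechanic under describes — funds unspendable from an address until a certain point passes — is what Bitcoin-derived chains implement with `OP_CHECKLOCKTIMEVERIFY` (BIP 65). A toy, height-only model of the check, not actual consensus code, which also handles timestamp locks and transaction finality:]

```python
def cltv_spendable(script_locktime, tx_locktime, chain_height):
    """Toy OP_CHECKLOCKTIMEVERIFY: a spend is valid only once the
    transaction's locktime covers the script's value AND the chain
    has reached that height (height-based locks only, for brevity)."""
    return tx_locktime >= script_locktime and chain_height >= script_locktime
```

Under this scheme the 500 RVN issuance fee would be provably unspendable for the lock interval, acting as a temporary burn without permanently destroying supply.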
[15:08] but in a what magical fairy land do people continue to work for free forever. funding development is a real issue... as much as some might philosphically disagree. its a reality
[15:08] You’d still need a centralized party to decide how to distribute the funds
[15:08] even unofficially blockstream supports bitcoin devs
[15:08] on chain is more transparent imho
[15:09] == Tron_ [[email protected]/web/freenode/ip.173.241.144.77] has joined #ravencoin-dev
[15:09] @UJP yes there are unlimited strategies. one factor that I think is v important is giving application developers a way to easily budget for projects which leads to flat fees
[15:09] If the project is a success like many of believe it will be, I believe plenty of people will gladly done to a dev fund. I don't think the 500 should be burned.
[15:09] *donate
[15:09] centralized conservatorship, directed by community voting process
[15:10] == Tron [[email protected]/web/freenode/ip.173.241.144.77] has quit [Ping timeout: 252 seconds]
[15:10] <@wolfsokta> Thanks Under, that's an interesting idea that we should continue to discuss in the community. You also mentioned the existing website.
[15:10] It would need to be something where everyone with a QT has a vote
[15:10] think his computer went to sleep again :-/
[15:10] I agree UJP
[15:10] with the website
[15:10] No that’s ico jargon — any development fund tied to code would have to be centralized and would therefor fail
[15:11] ^
[15:11] ^
[15:11] ^
[15:11] dashes model for funding seems to be pretty decentralized
[15:11] community voting etc
[15:11] Once you have a dev fund tied to code then who gets to run it? Who mediates disputes?
[15:11] oh well another discussion
[15:11] Dash has a CEO
[15:12] <@wolfsokta> Yeah, let's keep discussing in the community spaces.
[15:12] Dash does have a good model. It's in my top ten.
[15:12] having the burn go to a dev fund is absolute garbage
[15:12] These dev chats should be more target than broad general discussions — changing the entire nature of the coin and it’s economics is best discussed in the RIPs or other means
[15:13] <@wolfsokta> Yup, let's move on.
[15:13] just becuase existing implementation are garbage doesnt mean that all possible future governance options are garbage
[15:13] <@wolfsokta> To discussing the website scenario mentioned by under.
[15:13] the website needs work. would be best if it could be migrated to github as well.
[15:13] What about this: Anyone can issue a vote once the voting feature has been added, for a cost. The vote would be what the coins could be used for.
[15:14] features for the site that need work are more user friendly links to binaries
[15:14] <@wolfsokta> We investigated how bitcoin has their website in Github to make it easy for contributors to jump in.
[15:14] that means active maintenance of the site instead of its current static nature
[15:15] <@wolfsokta> I really like how it's static html, which makes it super simple to host/make changes.
[15:15] the static nature isn’t due to interface it’s due to no contributors
[15:15] no contribution mechanism has been offered
[15:15] github hosted would allow that
[15:16] We used to run the Bitcoin website from the foundation & the GitHub integration seemed to cause some issues
[15:16] its doesnt necessarily have to be hosted by github but the page source should be on github and contributions could easily be managed and tracked
[15:17] for example when a new release is dropped, the ability for the downlaods section to have platform specific easy links to the general installers is far better for general adoption than pointing users to github releases
[15:18] <@wolfsokta> How do people currently contribute to the existing website?
[15:18] they dont?
[15:18] We did that and it was a complete pain to host and keep working — if someone wants to volunteer to do that work hey can surely make the website better and continually updated — but they could do that in Wordpress also
[15:19] I’d say keep an eye out for volunteers and maybe we can get a group together who can improve the site
[15:19] == digitalvap0r-xmr [[email protected]/web/cgi-irc/kiwiirc.com/ip.67.255.25.134] has joined #ravencoin-dev
[15:19] And they can decide best method
[15:20] I host the source for the explorer on github and anyone can spin it up instantly on a basic aws node. changes can be made to interface etc, and allow for multilingual translations which have been offered by some community members
[15:20] there are models that work. just saying it should be looked at
[15:20] i gotta run thank you all for your contributions
[15:20] <@wolfsokta> I feel we should explore the source for the website being hosted in GitHub and discuss in our next dev meeting.
[15:21] <@Chatturga> Thanks Under!
[15:21] == under [[email protected]/web/freenode/ip.72.200.168.56] has quit [Quit: Page closed]
[15:21] <@wolfsokta> Thanks, we also need to drop soon.
[15:21] There is no official site so why care. Someone will do better than the next if RVN is worth it anyway. That's already the case.
[15:21] <@wolfsokta> Let's do 10 mins of open Q&A
[15:22] <@wolfsokta> Go...
[15:23] <@Chatturga> Beuller?
[15:24] No questions ... just a comment that the devs and community are great and I'm happy to be a part of it
[15:24] I think everyone moved to discord. I'll throw this out there. How confident is the dev team that things will be ready for oct 31st?
[15:24] <@wolfsokta> Alright! Thanks everybody for joining us today. Let's plan to get back together as a dev group in a couple of weeks.
[15:25] thanks block!
[15:25] <@wolfsokta> Urgo, very confident
[15:25] Please exclude trolls from Discord who haven't read the whitepaper
[15:25] great :)
[15:25] "things" will be ready..
[15:25] Next time on discord right?
[15:25] woah why discord?
[15:25] some of the suggestions here are horrid
[15:25] this is better less point
[15:25] == blondfrogs [[email protected]/web/freenode/ip.185.245.87.219] has quit [Quit: Page closed]
[15:25] Assets are working well on testnet. Plan is to get as much as we can safely test by Sept 30 -- this includes dev contributions. Oct will be heavy testing and making sure it is safe.
[15:26] people
[15:26] <@wolfsokta> Planning on same time, same IRC channel.
[15:26] == BW_ [[email protected]/web/freenode/ip.138.68.243.202] has quit [Quit: Page closed]
[15:26] @xmr any in particular?
[15:27] (or is "here" discord?)
[15:27] Cheers - Tron
[15:27] "Cheers - Tron" - Tron
submitted by Chatturga to Ravencoin


The Bitcoin.com mining pool has the lowest share reject rate (0.15%) we've ever seen. Other pools have over 0.30% rejected shares. Furthermore, the Bitcoin.com pool has a super responsive and reliable support team.

Remote Procedure Call Server: a remote procedure call (RPC) server is a network communication interface that provides remote connection and communication services to RPC clients. It enables remote users or RPC clients to execute commands and transfer data using RPC calls over the RPC protocol.

Expands the RPC help text for changes to the importmulti RPC method. Commands sent over the JSON-RPC interface and through the bitcoin-cli binary can now use named arguments. ...

- #9269 43e8150 Align struct COrphan definition (sipa)
- #9820 599c69a Fix pruning test broken by 2 hour manual prune window (ryanofsky)
- #9824 260c71c qa: Check return code when stopping nodes (MarcoFalke)
- #9875 50953c2 tests: Fix dangling pwalletMain pointer in wallet tests (laanwj)
- #9839 ...

Bitcoin Trader is an online platform. It scans price movements and then speculates on the crypto market. Investors who want to use Bitcoin Trader must first register on the platform and then deposit a certain amount before they can begin trading.
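The named-arguments change mentioned above is visible on the wire: with `bitcoin-cli -named`, the request's `params` field is sent as a JSON object keyed by parameter name instead of a positional array. A minimal sketch of the two request shapes, assuming a small helper of my own (the `rpc_payload` function and the `id` string are illustrative, not part of Bitcoin Core):

```python
import json

def rpc_payload(method, *args, **kwargs):
    """Build a JSON-RPC request body in the two shapes bitcoin-cli emits.

    Positional args become a params list; keyword args become a params
    object, mirroring `bitcoin-cli -named`. Mixing both is not allowed.
    """
    if args and kwargs:
        raise ValueError("use positional or named parameters, not both")
    return json.dumps({
        "jsonrpc": "1.0",
        "id": "doc-example",          # illustrative id, any string works
        "method": method,
        "params": list(args) if args else kwargs,
    })

# Positional style, as in: bitcoin-cli getblockhash 0
print(rpc_payload("getblockhash", 0))

# Named style, as in: bitcoin-cli -named getblockhash height=0
print(rpc_payload("getblockhash", height=0))
```

Either payload, POSTed to a node's RPC port with valid credentials, should produce the same result; the named form is simply easier to read when a method has many optional parameters.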


Ubuntu -2 Setting up .bitcoin folder

Related videos: Bitcoin JSON-RPC Tutorial 4 - Command Line Interface (5:14); Bitcoin JSON-RPC Tutorial 3 - bitcoin.conf (8:10); AWS - Bitcoin Full Node Setup (7:46).
