
AMA - Community Edition

Updated:
11) $5m buyback
12) Release of yp part 3?
13) It is allegedly possible that ICX supply can be doubled in only 4 years thanks to a whopping 20% annual token inflation
14) One of the things that got me excited about crypto was that there was no inflation. I'm a bit disappointed in Icons approach here.
15) Where is the DEX?
16) How far are we from interoperability? Am I correct in saying that interoperability is years from completion?
I'll be answering all questions to the best of my knowledge, this list will update regularly.
1) Clear description of how ICX will go up by benefiting from the LINE partnership -> 2 or 3 practical examples.
Don't forget Unchain is a joint venture, so Unchain is ICON's company as well, their success is directly beneficial to ICON. In a recent interview w Brad, Henry also shed some light regarding this JV and that it is way beyond a simple partnership agreement https://youtu.be/paFYyt1hVWc?t=155
2) Clear description of how ICX will go up by building private blockchains and connecting them -> 2 or 3 practical examples.
I answered this to someone on telegram a couple days ago. Here's my example,
"So I asked what's the use for icx with private chains. They have no reason to connect to the public chain and they have no reason to tokenize their business."
The missing link is interoperability. The private chains need a way to communicate with each other; this is actually how the ICON project was conceived. ICONLOOP (then loopchain) offered blockchain solutions to enterprises and consortiums, but they had no way to interoperate.
So I think the argument comes down to whether the natural design paradigm is for private chains to go public, or to interoperate through a public chain as a common block.
We've heard about those use cases and seen actual implementations, from U-coin vending machines to hospitals making insurance claims, etc.
I agree that in some cases it doesn't make sense for private chains to go public; if it's designing a problem just to solve it, let's not do that.
But I'd say, as a rough guess, that 90%+ of private chains have a reason to connect, much like intranet/internet.
Let me try another example, since we've heard the hospital/insurance one too many times.
Let's say there's the trade-financing supply system of a large manufacturer with thousands of vendors.
Before their enrollment, you'll probably need to do some identity and reputation checks on the public chain (common services like ID validation should be readily available as a public service, like chainID)
to validate their legitimacy. The next step is probably for the vendors who need trade financing, where they need a more complex system like a stablecoin to avoid volatility and to move the money around.
Instead of rebuilding a coin, they could adopt a coin system within the ICON network.
Then what happens next? I'd guess disputes over lost goods or quality problems. Again, vendors can call on a public arbitration system, where a network of lawyers who specialize in cross-border disputes, or arbiters, provide the service.
So we need a chain of services that can be called throughout the life cycle, interoperable between private and public chains.
There are plenty more use cases, but it's not a hard choice to make; it's definitely possible to have a common meeting point while maintaining sensitive information within the local blockchain.
In the example above, nothing is tokenized; their businesses run on a private blockchain without a native coin, but they use common services from the public chain like stablecoins or the arbitration system.
3) Monthly or quarterly reports on partnerships, marketing, and the tech.
You mean something like this? https://medium.com/helloiconworld/icon-3q-achievements-8c42ea798a0b
4) Opinion on why Korean people don't bring ICX volume to Korean exchanges.
I don't think even President Moon has an answer to this :P But are people really this patriotic when it comes to money? Do Americans invest in American ICOs just because they're made in the USA? I guess some do, but this is not (and shouldn't be) the main driving force of token demand.
5) Clarification on what kind of understanding we should have about this 124-strong team: are they employees with 40-hour/week contracts or just 2 hours, cooperation partners, freelancers, whatever.
I paid a visit to the KR office a couple months back, it was like a giant coding factory running full steam. I can attest to this, they're full time employees working around the clock.
6) Roadmap - stop giving yourself room for delays and interpretations by not offering a roadmap.
My suggestion on this one is to have a % completion roadmap with change logs. I think most people are more interested in progress, less deadlines.
7) Quarterly AMAs.
Sounds good.
8) Why the hell are ICON members still advisors at Sentinel Protocol, an ICO that promoted itself using ICON as its blockchain and then moved to EOS?
As far as I can tell, the two teams are still on good terms. The timing was unfortunate: SP always had their first product (Uppward) scheduled to launch shortly after their fundraising, and the public presale ended a lot faster than expected (scheduled to run for a week, it ended in 3 minutes). During that period ICON was migrating to mainnet V3 and doing the token swap. It made sense for them to deploy on a working platform without compromising their schedule. Their team also said that they haven't ruled out the possibility of migrating back to ICON (although I think it's less likely these days).
9) Spend some money on an English translation expert for your social media appearances.
The translations (YouTube subtitles) were a bit sloppy, I agree; understandable enough, but they should definitely spend more time proofreading. Professional presentation is a thing.
10) How much of the received ICO money/ether has been provided directly or indirectly to ICONLOOP?
The ETH raised from the ICO is barely spent; you can check on Etherscan from the contribution address.
11) $5m buyback
From the key announcement by ICON Foundation's CFO Jay: the repurchase program is a pending legal matter; after consultation with law firms, they'll proceed with the buyback. https://youtu.be/keDitkWssv8?t=160
The team stated two main intentions for conducting this program,
If you read between the lines from the buyback announcement https://medium.com/helloiconworld/key-announcements-from-icon-8ea0f5a18d6f
Repurchases under the foundation’s program will be made in open market or privately negotiated transactions subject to market conditions, applicable legal requirements, and other relevant factors.
What this is saying is that the buyback has no intention of creating short-term pumps; otherwise all purchases would have been made in the open market on a timed schedule. What this also implies is that there won't be a public wallet with an open schedule, to avoid legal obligations (insider trading) or unintended consequences (manipulation).
So what is to be expected? Giving a deadline wouldn't make sense because everything can be timed, so my take is that an announcement will be made after the repurchase has been completed. I don't think anyone can take advantage of this program, but everyone still benefits directly from $5M worth of tokens coming off the market supply.
12) Release of yp part 3?
This is understandably a highly anticipated yellow paper, as it will likely outline all the details we need to know about staking. This YP, however, is not just a simple table with your annual returns; it is also technically far more complex than the previous two YPs.
I provided a very simplified explanation for IISS in this thread: https://twitter.com/2infiniti/status/1020141186797846529
IISS is however a lot more complicated than this; it is a full AI-based incentive scoring system that explores the optimal incentive scheme to vitalize the ecosystem. On top of incentives, it is also the base metric for governance policies (voting). Incentives are designed with token-economic studies to reinforce target behavior, based on operant conditioning principles, e.g. dormant accounts, distribution schemes based on activity levels, penalties for malicious nodes, etc., and it is very difficult to get right.
If you look into the WP, IISS is explored further with things like mitigation of inequalities, weighted averages and adjustment, efficiency of IISS, fairness of distribution, prevention of misuse, and many other topics covered in depth.
The point is, this YP is very complex, and personally I'd want the team to take as much time as it needs to get it done right. IISS will ultimately decide the overall health of our ecosystem, its sustainability and, well, our passive income.
With that said, I'm also with you in that I'd love to see the details asap, as I have plans to build a tool similar to the Virtual Step Calculator where people can easily calculate their returns. From the announcement at least, it does look like the team is close to completion, labeling the release "soon", so let's just have a little patience and let them do all the necessary last checks.
Also, as a reality check, YPs are research that needs to be formalized, implemented and iterated enough times before an official release. So please don't expect to start staking right away when YP pt. 3 sees the light of day.
13) It is allegedly possible that ICX supply can be doubled in only 4 years thanks to a whopping 20% annual token inflation
Please go to this thread for my explanation: https://twitter.com/2infiniti/status/1060397068852748288
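As a quick sanity check of the arithmetic behind this question (illustrative only; as discussed in question 14, actual ICX issuance is activity-dependent and bounded, so a flat 20% is a hypothetical worst case, not the real model):

```python
# Hypothetical check: 20% inflation compounded annually for 4 years.
# This illustrates the claim's arithmetic, not ICON's actual issuance.
supply_multiple = 1.0
for _ in range(4):
    supply_multiple *= 1.20  # +20% per year

print(round(supply_multiple, 2))  # 2.07, i.e. supply would roughly double
```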
14) One of the things that got me excited about crypto was that there was no inflation. I'm a bit disappointed in Icons approach here.
Most crypto token issuance models can be broken down into these 3 categories
All of the above models can work in their own ways, depending on the behavior each is trying to incentivize. Sustainable crypto economies are backed by a recursive loop of value transfer that all participants are incentivized to join. The goal is to create an incentive loop in which all parties act in their own self-interest, thereby creating greater value.
Let's take a look at Bitcoin's incentive loop: a simple model where mining is profitable, more miners create more security, and security adds intrinsic value.
Mine bitcoin -> market dynamics decide value -> incentive to mine -> security of network increases -> more incentive to mine ←|
Augur’s case
Trusted prediction platform -> more stakes in events -> more incentive for REP holders to verify truth -> more people verifying, more trusted ←|
In ICON’s case, incentives are centered around i_score, which is a function of activities within the network. The incentive loop would look something like this
I_score rewards and governance control (votes) -> more incentive to participate in activities and governance policies -> increased network security and activity ←|
Similar incentive loop found in SCORE
SCORE staking (virtual steps) -> increased activities -> sustainable SCOREs ←|
Now for continuous issuance models: the goals are no different from other models. They want to issue tokens just enough that it is optimal for maintaining security and encouraging participation, creating a healthy incentive loop.
But can’t these models infinitely issue to a point where my money is worth next to nothing?
Yes, this is in theory possible. For Ethereum, a majority of network miners could approve such a change (say, removing the ice age), with a new Ethereum client to accommodate it, resulting in issuance similar to a 51% attack. Since issued ETH is also linked to the value of a single token, this would render ETH much less valuable. In practice this is extremely unlikely to happen, as miners are financially discouraged from doing so; they have much more to lose. It's just part of the game theory.
ICON's issuance is a system implementation which depends on activities happening in the network. There are also preventive measures such as an issuance upper bound and representative mitigations. I explained the issuance model in full in this thread: https://twitter.com/2infiniti/status/1060397068852748288
15) Where is the DEX?
For this one hear the explanation directly from Min: https://youtu.be/tk2tZpnrI0o?t=1662
16) How far are we from interoperability? Am I correct in saying that interoperability is years from completion?
Not entirely. Interoperability will likely take a few phases to roll out; what we should be anticipating right now is the BTP (Blockchain Transmission Protocol) specification.
What exactly is BTP?
At the abstract level, BTP creates a mechanism by which two channels may pass messages to each other. BTP assumes multiple channels (e.g. private blockchains from ICONLOOP) running on the ICON network under their own state and logic, while at the same time connecting to the base channel for the consensus mechanism. This is the simplest form of interoperability.
Down the road we should expect more and more advanced versions, handling threat models, connection lifecycles, asynchronous requests, and all sorts of optimizations. This enables interoperability between blockchains one phase at a time, gradually reaching the end game of hyperconnecting the world.
So how long is this going to take?
I do not know. But the purpose of this reply is to explain that interoperability is not an on-off switch, but will likely take many phases to roll out.
submitted by msg2infiniti to helloicon

NPIP004: Static Block Reward

After the ClockSync fix was soft forked into the network a couple of months ago, NavCoin is now compliant with the Proof of Stake v2 protocol as published by Blackcoin:
https://blackcoin.org/blackcoin-pos-protocol-v2-whitepaper.pdf
The next logical step is to become compliant with PoS v3. The spec can be read here:
https://bravenewcoin.com/assets/Whitepapers/Blackcoin-POS-3.pdf
The short version is that PoS v3 includes cold staking capability and a fixed block reward.
We have already presented cold staking in NPIP002 and it has received unanimous support from the community. It is scheduled to be deployed after the Community Fund claims mechanism goes live, and brings NavCoin halfway to being compliant with PoS v3.
This brings us to the second part of the PoS v3 spec, a fixed block reward.
Why would we want a fixed block reward instead of a percentage-based reward? The main consideration is that while earning stake rewards is nice for your NAV balance, the primary purpose of staking is being rewarded for validating and securing the network. With the current percentage-based rewards, coins can be offline for an indefinite period, not securing the network, then come online to claim their reward even though they have done very little work to secure the network beyond minting a few blocks.
Coins which are online are using their weight to validate blocks minted by other stakers and play an important part in securing the network, even if they're not the one minting the current block. They are what protects the network against a 51% attack and it is therefore important for network security to have as much coin weight online as possible.
To read the full rationale, please refer to NPIP004 here: https://github.com/NAVCoin/npips/blob/npip-0004/npip-0004.mediawiki
Please remember that this is a draft at this stage and is open for discussion. Ultimately no-one can alter the consensus mechanism without support from the network, so the choice will be up to the community and network to decide the best course forward. I want to put a few additional thoughts on paper here which I would love some feedback on.

Overview

NPIP004 suggests setting the static block reward at 2 NAV per block.
There are approximately 1,051,200 (2*60*24*365) blocks mined per year which means there would be 2,102,400 NAV generated per year by proof of stake rewards.
There are currently ~63M NAV in circulation, so this would set the inflation rate from stake rewards to about 3.3% annually. The other thing to take into consideration with a static reward is that, as a percentage, it will steadily decrease over time.
e.g. when the circulating amount is 100M NAV, stake rewards would still generate the same amount of NAV, which equates to 2.1% of total supply instead of 3.3%.
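The issuance numbers above can be reproduced in a few lines (a sketch using the figures quoted in this proposal):

```python
# NPIP004 issuance math: one block every 30 seconds, 2 NAV per block.
BLOCKS_PER_YEAR = 2 * 60 * 24 * 365   # 1,051,200 blocks per year
REWARD_PER_BLOCK = 2                  # NAV

annual_issuance = BLOCKS_PER_YEAR * REWARD_PER_BLOCK
print(annual_issuance)                # 2102400 NAV per year

# Inflation from stake rewards at today's ~63M supply vs a future 100M supply.
for supply in (63_000_000, 100_000_000):
    print(f"{100 * annual_issuance / supply:.1f}%")  # 3.3% then 2.1%
```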

Deflationary supply

There is some debate over whether a deflationary supply like this is a good or a bad thing. In regards to supply-demand economics, it has proven to be a massive boon for Bitcoin, with the value increasing sharply after every mining reward halving. The counter-argument is that it is bad for distribution, since it rewards early adopters more than new entrants to the ecosystem.
Personally, I'm for the deflationary model. I think the difference in staking rewards from now until we have 100M in circulation (10+ years from now) is negligible compared to adoption when we're talking about things which affect supply-demand economics. The rate reduces by about 1/3 over roughly 10 years, not halving every 4 years as with Bitcoin.

Inequality

There has been some discussion as to how this could drive a further divide between stakers with more and less NAV. The thing to keep in mind is that although the rewards are fixed, the number of blocks you stake is still proportional to your staking weight on the network. This means that stakers still increase in wealth proportionally to each other as a percentage. Let's run a few scenarios.
Assuming there are 20M NAV contributing to staking, just like there is today, here's what the stake rewards would look like for some different balances over a 1-year period.

| Balance | After 1 Year | Percentage |
| --- | --- | --- |
| 1,000,000 NAV | 1,105,120.00 NAV | 10.512% |
| 100,000 NAV | 110,512.00 NAV | 10.512% |
| 10,000 NAV | 11,051.20 NAV | 10.512% |
| 1,000 NAV | 1,105.12 NAV | 10.512% |

As you can see, the only real thing that happens is that the decimal place shifts with different input values; as a percentage, everyone increases proportionally to what they put in.
This is a slightly over simplified view, but it is largely accurate. Whether you have 10% or 0.001% of the total staking weight, you will mint blocks proportionally to your weight, so everyone's balances increase at the same percentages.
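The table above follows directly from the proportionality argument (a sketch that ignores compounding and orphans, using the same assumed 20M staking weight):

```python
# With a fixed total issuance, each staker's expected reward is just their
# share of the staking weight, so everyone gains the same percentage.
NETWORK_WEIGHT = 20_000_000   # NAV assumed to be staking
ANNUAL_REWARDS = 2_102_400    # NAV issued per year at 2 NAV/block

for balance in (1_000_000, 100_000, 10_000, 1_000):
    reward = ANNUAL_REWARDS * balance / NETWORK_WEIGHT
    print(f"{balance:>9,} NAV -> {balance + reward:>12,.2f} NAV "
          f"({100 * reward / balance:.3f}%)")  # 10.512% for every balance
```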
The only thing which could complicate the matter is compound interest. A few people have been concerned that because a person with a larger balance stakes more frequently, they will effectively run away from the smaller stakers, who would never get the opportunity to stake.
I wrote a small computer program to simulate the staking rewards over 1 year, taking into account the network weight and the additional 2 NAV added every time someone finds a block. The assumption I've made is the worst-case scenario, i.e. staked coins are never spent but compound back onto the staking weight.
You can read the program here: https://github.com/craigmacgregor/static-reward-modeller/blob/master/model.js
In layman's terms, it calculates when you'd be due for a reward based on your weight vs the rest of the network, where the network starts with 20M NAV and gets 2 NAV added every 30 seconds. The output is as follows:
| Staker | Balance Start | Balance End | Percent Gain |
| --- | --- | --- | --- |
| balance1 | 1,000,000 | 1,105,120 | 10.51% |
| balance2 | 100,000 | 110,512 | 10.51% |
| balance3 | 10,000 | 11,052 | 10.52% |
| balance4 | 1,000 | 1,106 | 10.6% |
| network weight | 20,000,000 | 22,102,400 | 10.51% |
So, as you can see, the smaller stakers still get their rewards, even though the bigger stakers' balances are going up by 2 NAV roughly every 20 blocks. I even modelled this for someone staking 100 NAV, and they end up with 112 NAV after 1 year (a 12% gain). So if anything, this model marginally favours smaller stakers over bigger ones, which was actually a surprising result.
The only thing this doesn't take into account is resolving orphans. I can't simulate orphans easily with a basic JavaScript program; it is something I will investigate when I run the NPIP on the testnet to make sure there are no problems in the real world. But I assume they will be of little consequence.
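For anyone who'd rather not read the JavaScript, here is a rough Python equivalent of the model described above. One simplification (my assumption, not the original's): instead of running a block-by-block lottery, each staker is credited its expected share of every 2 NAV reward, with winnings compounded into its weight. That smoothing means every balance ends up at exactly the network-average gain, so it reproduces the ~10.51% figure but not the tiny small-staker edge the lottery version showed.

```python
# Deterministic (expected-value) version of the staking simulation:
# the network starts at 20M NAV, 2 NAV is minted every 30-second block,
# and rewards are split across stakers in proportion to current weight.
BLOCKS_PER_YEAR = 2 * 60 * 24 * 365
REWARD = 2

starts = {"balance1": 1_000_000, "balance2": 100_000,
          "balance3": 10_000, "balance4": 1_000}
balances = dict(starts)
others = 20_000_000 - sum(starts.values())  # rest of the network's weight

for _ in range(BLOCKS_PER_YEAR):
    weight = others + sum(balances.values())
    for name in balances:
        balances[name] += REWARD * balances[name] / weight  # expected share
    others += REWARD * others / weight

for name, start in starts.items():
    gain = 100 * (balances[name] - start) / start
    print(name, round(balances[name]), f"{gain:.2f}%")  # ~10.51% each
```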

Why is it over 10% gain?

You have to remember that the total amount generated is fixed but split proportionally. With a network weight of 20M, the annual reward per coin is 10.5%, but if 40M coins were staking, the annual reward per coin would be 5.25%. If more people bring coins online to stake, the rewards decrease. Currently only around 25% of NAV is online for staking, but typically we see around 40% of NAV online, which would mean an annual reward of around 8.4% per coin. If 100% of coins were used for staking, the annual reward would be equal to 3.33% per coin.
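Put as a one-liner, the reward rate is just total issuance divided by whatever weight is online (same figures as quoted above, modulo small rounding differences):

```python
# Fixed 2,102,400 NAV/year split across whatever weight is online staking.
ANNUAL_REWARDS = 2_102_400   # NAV issued per year at 2 NAV per block

for weight in (20_000_000, 40_000_000, 63_000_000):  # NAV online staking
    print(f"{weight:,} staking -> "
          f"{100 * ANNUAL_REWARDS / weight:.2f}% per coin")
    # 10.51%, 5.26%, 3.34% respectively
```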

How does this compare to other coins?

| Coin | Reward |
| --- | --- |
| PIVX | 5-10% |
| ARK | 10-12% |
| LSK | 10% |
| NEBL | 10% |
| NAV | 5-10% |
Source: https://www.investinblockchain.com/best-proof-of-stake-coins
So this move would put us in step with other PoS coins and actually still remain on the low end of the reward scale, especially if more people start staking.
I found this spreadsheet which has pretty detailed information about a bunch of coins and their inflation rates:
https://docs.google.com/spreadsheets/d/1-weHt0PiIZWyXs1Uzp7QIUKk9TX7aa15RtFc8JJpn7g/edit#gid=237137882
From this, you can see that NavCoin would still have one of the lowest inflation rates in crypto when you include PoW coins as well. Bitcoin currently inflates at around 3.68% as example.

Isn't low inflation like we have now better?

With 4% per year and only 25% of coins staking, NavCoin currently inflates at only around 1.4% per year (including the community fund). We've seen the staking network weight roughly halve over the last 6 months, something which could be attributed to the reduction of rewards when the community fund was introduced. It's possible people are switching to other, more profitable PoS coins because a 4% reward is too low. At this network weight and market rate, it would take only around USD $2M worth of coins to perform a 51% attack. In reality, buying enough coins to 51% attack the network would drive the price of NAV up and make the attack much more expensive, but it's still worth noting how important it is for network security to attract more people to stake.
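For what it's worth, the $2M figure can be reverse-engineered like this (my own back-of-the-envelope; the implied price is derived from the numbers quoted above, not from market data):

```python
# If ~25% of the ~63M supply is online, an attacker must bring slightly
# more weight than that online to control block production.
online_weight = 0.25 * 63_000_000            # 15,750,000 NAV staking
attack_cost_usd = 2_000_000                  # figure quoted above
implied_price = attack_cost_usd / online_weight
print(round(online_weight), round(implied_price, 3))  # 15750000 0.127
```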

Conclusion

Changing to a static block reward of 2 NAV per block increases network security in multiple ways, the first being that it forces people to be online securing the network with their weight constantly. Secondly, it would increase potential earnings for stakers which would attract more people to stake NavCoin and increase the network weight further. Both of these factors make the network harder to 51% attack and would improve network security.

Additional suggested changes

When we originally proposed 0.25 NAV per block for the Community Fund, we calculated that as 20% of the inflation rate at the time. So reducing from 5% to 4% and adding 0.25 NAV was roughly equal. However, this calculation was based on 40% of coins staking at a 5% reward. I would suggest that if we move to a static block reward, we increase the community fund amount to 0.5 NAV per block, so it retains the 20% ratio to staking rewards as originally intended.
This would mean 2,102,400 NAV created per year for staking and 525,600 NAV per year for the community fund, totalling 2,628,000 new NAV created per year. This equals an initial inflation rate of 4.17%, which decreases as a percentage over time, as explained previously.
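Combining the two streams (a sketch of the totals quoted above):

```python
# Combined issuance with the suggested 0.5 NAV/block community fund.
BLOCKS_PER_YEAR = 2 * 60 * 24 * 365
staking = 2.0 * BLOCKS_PER_YEAR        # 2,102,400 NAV for stakers
fund = 0.5 * BLOCKS_PER_YEAR           # 525,600 NAV for the community fund
total = staking + fund

print(int(total))                              # 2628000 new NAV per year
print(f"{100 * total / 63_000_000:.2f}%")      # 4.17% initial inflation
```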

Alternative approaches

Maximum Coin Age
We could introduce a maximum coin age of 1 month. If coins came online after 6 months to claim a reward, they would only receive 1 month's worth of reward. This would incentivise people to remain online, because otherwise they would miss out on rewards. However, a big staker can cycle through all their coins quite quickly, while a small staker could potentially miss out on rewards even if they stayed online the whole time. I would argue this solution is worse for small stakers than a static reward. It also doesn't address the fact that other coins have higher rewards, and it attracts no new users.
Block Validator Reward
We could keep the coin-age-based staking rewards for the block minter and create an additional static reward which the minter issues to people who are online and securing the network with their weight, even if they aren't the block minter. It would still essentially be a lottery based on network weight, but this way we'd have a hybrid system where everyone gets their percentage, and people who are online staking all the time get extra. This alternative would take a reasonable amount of investigation, research and testing to accomplish, and it hasn't been trialled before AFAIK. For simplicity's sake, I would argue that just using a static reward is the better option.
Other approaches
I'm not sure what else; I haven't thought of any other ways to solve this problem yet. If you have any ideas, don't be afraid to post them in the thread.

Conclusion

I'm personally in favour of changing the block reward to 2 NAV and increasing the Community Fund to 0.5 NAV per block. I'd be happy to hear your thoughts, so please post your feedback below.
submitted by pakage to NavCoin

A tour of the Gridcoin wallet

Hey guys, I thought I would put together an in-depth tour of the Gridcoin wallet software for all of our recent newcomers. Here I'll be outlining all the features and functions the windows GUI wallet has to offer, along with some basic RPC command usage. I'll be using the windows wallet as an example, but both linux and macOS should be rather similar. I'll be including as many pictures as I can as embedded hyperlinks.
Edit: Note that since I originally made this there has been a UI update, so your client will be different colors but all the button locations are in the same place.
This is my first post like this, so please forgive me if this appears a little scatter-brained.
This will not cover the mining setup process for pool or solo miners.
When you launch the wallet software for the first time you should be greeted with this screen.

OVERVIEW TAB

After that prompt, you should be left sitting on the main overview tab with several fields on it.
From top to bottom:

SEND TAB

Now onto the other tabs on the left side. Currently we're on the Overview tab, so let's move down to the Send tab. This tab is pretty self-explanatory: you use it when you want to send coins, but I'll go over the fields here:
  • Pay To: Enter a valid Gridcoin address to send coins to. Gridcoin addresses always start with an S or an R.
  • Label: Enter a label here and it will put that address in your "address book" under that label for later use. You can leave it blank if you don't want it in your address book.
  • Message: Enter a message here if you want it attached to your transaction.
  • Amount: How many coins you want to send.
  • Add Attachment: Leave this alone, it is broken.
  • Track Coins: This doesn't do anything.

RECEIVE TAB

Now down to the Receive tab. Here you should have a single address listed. If you double-click on the label field, you can edit its label.
  • New: Generate a new address.
If you click on an address, the rest of the options should be clickable.
  • Copy: Copy the selected address to your clipboard.
  • Show QR Code: Show a scan-able QR code for the selected address.
  • Sign Message: Cryptographically sign a message using the selected address.

TRANSACTIONS TAB

The Transactions tab is pretty boring considering we have no transactions yet. But as you can see there are some sorting tools at the top for when you do have transactions listed.

ADDRESS BOOK TAB

The Address Book is where all the addresses you've labeled (that aren't yours) will show up.
  • Verify Message: Verifies a message was signed by the selected address.
The rest of the functions are similar to the functions on the Receive tab.

VOTING TAB

Onto the Voting tab. There won't be any polls because we aren't in sync yet.
  • Reload Polls: Pretty self-explanatory, I've never had to use this.
  • Load History: By default, the wallet will only display active polls. If you want to view past polls you can use this.
  • Create Poll: You can create a network-wide poll. You must have 100,000 coins as a requirement to make a poll. (Creating a poll does not consume the coins)
Here's what the Voting tab will look like once you're in sync

CONTEXT BAR

Now onto the context bar menus on the top.
Under File you have:
  • Backup Wallet/Config: This lets you backup your wallet configuration file just in case.
  • Export: You can export your Transactions tab or Address Book in CSV format.
  • Sign message: Does the same thing as on the Receive tab.
  • Verify message: Does the same thing as on the Address Book tab.
  • Exit: Close the wallet.
Under Settings you have:
  • Encrypt Wallet: Encrypts your wallet with a password. (we'll come back to this)
  • Change Passphrase: Allows you to change your encryption password.
  • Options: Opens the options menu. (We'll come back to this)
Under Community you have:
Under Advanced you have:
  • Advanced Configuration: Opens the Advanced Configuration menu. (Not so advanced if you ask me)
  • Neural Network: Allows you to view solo miners project statistics. It will be largely blank if you're not in sync yet.
  • FAQ: Don't touch this, It is broken.
  • Foundation: Don't touch this, It is broken.
  • Rebuild Block Chain: Starts the client syncing from 0. Don't worry, using this will not make you lose coins.
  • Download Blocks: Downloads the latest official snapshot, can help speed up syncing. The download progress tends to sit at 99.99% for a long time, don't worry, it's working.
Under Help you have:
  • Debug window: Opens the debug window. (We'll come back to this)
  • Diagnostics: This used to be broken but has since been fixed. You can use it to see if there is anything wrong with your setup.
  • About Gridcoin: Opens the About Dialog. This gives you your client version and other information.

OPTIONS

Now back to the options menu under Settings > Options.
Here we have the options menu main tab:
  • Pay transaction fee: The transaction fee that will be automatically paid when you make a transaction.
  • Reserve: You can reserve an amount so that it will always be available for spending.
  • Start Gridcoin on system login: Pretty self-explanatory
  • Detach databases at shutdown: Speeds up shutdown, but causes your blockchain file to no longer be portable.
On the Network tab:
  • Map port using UPnP: Attempts to connect to nodes through UPnP.
  • Connect through SOCKS proxy: Allows you to connect through a proxy.
The window tab is pretty self-explanatory.
The Display tab is also pretty self-explanatory, with the exception of:
  • Display coin control features (experts only!): This allows you to have a great deal of control over the coins in your wallet, check this for now and I'll explain how to use it further down. Don't forget to click "Apply".

ENCRYPTING YOUR WALLET

Now that all of that is out of the way, the first thing you'll want to do is encrypt your wallet. This prevents anybody with access to your computer from sending your coins. This is something I would recommend everyone do.
Go to Settings > Encrypt Wallet and create a password. YOU CANNOT RECOVER YOUR COINS IF YOU FORGET YOUR PASSWORD.
Your wallet will close and you will have to start it up again. This time when it opens up, you should have a new button in the bottom left. Now if you want to stake you will have to unlock your wallet. Notice the "For staking only" box that is checked by default. If you want to send a beacon for solo mining or vote, you will need to uncheck this box.

GETTING IN SYNC AND ICONS

Before we continue, let's wait until we're in sync. Depending on your internet speed, this could take from several hours to over a day or two. It can be sped up by using Advanced > Download Blocks, but even that can take several hours.
This is what an in-sync client should look like. Notice the green check to the right of the Receive tab. All of these icons give you information when you hover your mouse over them.
The lock tells you whether your wallet is locked (encrypted) or unlocked.
The arrow tells you if you're staking. If you aren't staking, it will tell you why you're not staking. If you are staking it will give you an estimated staking time. Staking is a very random process and this is only an estimate, not a countdown.
The connection bars tell you how many connections to the network you have.
The check tells you if you're in sync.

WHAT IS STAKING?

Now I've said "stake" about a million times so far and haven't explained it. Gridcoin is a Proof of Stake (PoS) coin.
Unlike Bitcoin's Proof of Work (PoW), PoS uses few system resources, so you can use those resources for scientific work. PoS works by users "staking" with their balance. The higher the balance, the higher the chance to create, or "stake", a block. This means you need to have a positive balance in order to stake. Theoretically, you can stake with any amount over 0.0125 coins, but in practice it's recommended to have at least 2000 coins to reliably stake.
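As a rough illustration of proportional stake selection, here is a toy model (not the actual Gridcoin kernel, which also factors in coin age and network difficulty):

```python
import random

MIN_STAKE = 0.0125  # UTXOs at or below this amount cannot stake

def pick_staker(balances, rng=random):
    """Toy model: each eligible balance's chance to stake the next
    block is proportional to its size."""
    eligible = {k: v for k, v in balances.items() if v > MIN_STAKE}
    total = sum(eligible.values())
    r = rng.uniform(0, total)
    for owner, bal in eligible.items():
        r -= bal
        if r <= 0:
            return owner
    return owner  # guard against floating-point rounding at the top end

# A 2000-coin wallet should stake roughly 4x as often as a 500-coin
# wallet; the dust balance never stakes at all.
wallets = {"alice": 2000.0, "bob": 500.0, "dust": 0.001}
wins = {"alice": 0, "bob": 0}
random.seed(42)
for _ in range(10_000):
    wins[pick_staker(wallets)] += 1
ratio = wins["alice"] / wins["bob"]
print(round(ratio, 1))  # roughly 4
```

The 0.0125 cutoff and 2000-coin rule of thumb come straight from the paragraph above; everything else is simplified for illustration.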
Staking is important for solo miners, because they get paid when they stake. Pool miners, however, don't need to stake in order to get paid. So if you want to solo mine, you'll need to buy some coins from an exchange, or start in the pool first and move to solo when you have enough coins.
In addition to Research Rewards for miners, anyone who holds coins (solo miners, pool miners, and investors) earns 1.5% annual interest on top of their coins. So it can be beneficial for pool miners to stake as well.
Here is a snippet of what a research rewards transaction looks like from my personal wallet. I have a label on that address of "Payout address" as you can see here.

UTXOS AND COIN CONTROL

At this point you'll need some coins. You can use one of our faucets like this one or this one to test coin control out.
First let me explain what a UTXO is. UTXO stands for Unspent Transaction Output. Say you have an address with 0 coins in it, and someone sends you 10 coins like I've done here. Those 10 coins are added to that address in the form of a UTXO, so we have an address with one 10 coin UTXO in it.
Now we receive another 5 coins at the same address, like so. Now we have an address with one 10 coin UTXO and one 5 coin UTXO. But how do we view how our addresses are split up into different UTXOs?
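The two payments above can be sketched as a toy ledger (illustrative only; the address label and amounts are made up):

```python
from collections import defaultdict

class Wallet:
    """Toy UTXO ledger: an address holds a list of discrete unspent
    outputs, not a single running balance."""
    def __init__(self):
        self.utxos = defaultdict(list)  # address -> [UTXO amounts]

    def receive(self, address, amount):
        # Each incoming payment creates a brand-new UTXO
        self.utxos[address].append(amount)

    def balance(self, address):
        # The "balance" shown in a wallet is just the sum of UTXOs
        return sum(self.utxos[address])

w = Wallet()
w.receive("RcvAddr1", 10)  # first payment: one 10-coin UTXO
w.receive("RcvAddr1", 5)   # second payment: a separate 5-coin UTXO
print(w.utxos["RcvAddr1"])    # [10, 5] -- two UTXOs, not one 15-coin lump
print(w.balance("RcvAddr1"))  # 15
```

This is exactly what the coin control "Inputs" window visualizes: the individual UTXOs behind each address balance.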
Earlier we checked the "Display coin control features" box in Settings > Options > Display. Once that's checked you'll notice there's another section in the Send tab labeled "Coin Control Features". If you click the "Inputs" button, you'll get a new window. And look, there's our 2 UTXOs.
All UTXOs try to stake separately from each other, and remember that the chance a UTXO has to stake is proportional to its size. So in this situation, my 10 coin UTXO has twice the chance to stake as my 5 coin UTXO. Now wallets, especially ones that make a lot of transactions, can get very fragmented over time. I've fragmented my wallet a little so I can show you what I'm talking about.
How do we clean this up? We can consolidate all this into one UTXO by checking all the boxes on the left and selecting OK.
Now pay attention to the fields on the top:
  • Quantity: The total amount of UTXOs we have selected.
  • Amount: The total amount of coins we have selected.
  • Fee: How much it would cost in fees to send all those UTXOs (more UTXOs = more transaction data = more fees)
  • After Fee: Amount - Fees.
  • Bytes: How large the transaction is in bytes.
  • Priority: How your client would prioritize making a transaction with this specific set of UTXOs selected had you not used coin control.
  • Low Output: If your transaction is less than 0.01 coins (I think).
  • Change: What you will get back in change.
  • custom change address: You can set the address you get your change back at, by default it will generate a new address.
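A back-of-the-envelope sketch of how those fields relate (the per-input byte size and fee rate here are rough assumptions for illustration, not Gridcoin's exact serialization rules):

```python
def consolidation_tx(utxos, fee_per_kb=0.0002):
    """Rough model of the coin control summary fields: more inputs
    means more bytes, which means a larger fee. Sizes are ballpark
    figures (~148 bytes per input, 34 per output, 10 overhead)."""
    n_in = len(utxos)
    tx_bytes = n_in * 148 + 34 + 10
    fee = max(fee_per_kb, tx_bytes / 1000 * fee_per_kb)
    amount = sum(utxos)
    return {
        "quantity": n_in,               # UTXOs selected
        "amount": amount,               # coins selected
        "bytes": tx_bytes,              # transaction size
        "fee": round(fee, 8),           # cost to spend them all
        "after_fee": round(amount - fee, 8),  # what to put in "Pay To"
    }

# Sweeping 7 fragments into one output: pay the "After Fee" value
# to your own address and no change is produced.
result = consolidation_tx([10, 5, 1.2, 0.8, 0.5, 0.3, 0.2])
print(result)
```

Note how `after_fee` is simply `amount - fee`, which is why sending exactly the "After Fee" value back to yourself leaves no change.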
So let's fill out our transaction so we end up with 1 UTXO at the end.
In "Pay To:" Just put any address in your wallet, and for the amount put what it has listed in the "After Fee" Field. Just like this.
Notice how we get no change back.
Now click "Send", we'll be prompted to enter our passphrase and we're asked if we want to pay the fee, go ahead and click "Yes".
Now if we go back to the Overview tab we get this funky icon. If you hover your mouse over it, it says "Payment to yourself", and the -0.0002 GRC is the network transaction fee.
(Ignore the first one, that was me fragmenting my wallet)
Now if we look at the Coin Control menu, we can see that we've slimmed our wallet down from 7 UTXOs to 1.
Now why would you want to use coin control?
3 situations:
  1. UTXOs less than 0.0125 coins cannot stake. So you can combine a lot of tiny, useless UTXOs into 1 bigger one that can stake.
  2. After a UTXO stakes, it cannot stake for another 16 hours. So if you have 1 large UTXO that is big enough to stake more than once every 16 hours, you can split it into smaller UTXOs which can allow you to stake slightly more often.
  3. By default, the wallet will always generate a new address for change, which can make your wallet get very messy if you're sending lots of transactions. Keep in mind that more UTXOs = larger transactions = more fees.
Sidenote - When you stake, you will earn all research rewards owed regardless of which UTXO staked. However, you'll only earn the 1.5% interest on that UTXO, not your whole wallet.

FORKING

A fork is when the network splits into multiple chains, with part of the network on each chain. A fork can happen when 2 blocks are staked by different clients at the same time or very close to the same time, or when your client rejects a block that should have been accepted due to a bug in the code or through some other unique circumstance.
How do I know if I'm on a fork?
Generally you can spot a fork by looking at the difficulty on your Overview tab. With current network conditions, if your difficulty is below 0.1, then you're probably on a fork.
You can confirm this by comparing your blockhash with someone else's, like a block explorer's.
Go to [Help > Debug Window > Console]. This is the RPC console, which we can use to do a lot of things. You can type help to get a list of commands, and you can type help [command you need help with] (without the brackets) to get information on a command. We'll be using the getblockhash [block number] command.
Type getblockhash [block number] in the console, but replace [block number] with the number listed next to the "Blocks:" field on the Overview tab.
This will spit out a crazy string of characters, this is the "blockhash" of that block.
Now head over to your favorite block explorer, I'll be using gridcoinstats. Find the block that you have the hash for, use the search bar or just find it in the list of blocks.
Now compare your hash with the one gridcoinstats gives you. Does it match?
If it matches, then you're probably good to go. If it matches but you still think you're on a fork, then you can try other block explorers, such as gridcoin.network or neuralminer.io.
If it doesn't match, then you need to try to get off that fork.
How do I get off a fork?
  1. Just wait for an hour or two. 95% of the time your client is able to recover itself from a fork given a little time.
  2. Restart the client, wait a few minutes to see if it fixes itself. If it doesn't, restart again and wait. Repeat about 4 or 5 times.
  3. Find where the fork started. Using the getblockhash command, go back some blocks and compare hashes with a block explorer to narrow down the last block you and the explorer had in common. Then use reorganize [the last block hash you had in common]. Note that reorganize takes a blockhash, not a block number.
  4. Use Advanced > Download Blocks.
  5. If none of this works, you can take a look at social media (reddit/steemit) and see what other people are saying.
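Step 3 above, finding where the fork started, can be sketched like this (a toy helper; in practice you'd fill the two maps by running getblockhash at each height in the console and querying a block explorer for the same heights):

```python
def last_common_block(local_hashes, explorer_hashes):
    """Walk back from the tip to find the last block the local chain
    and the explorer agree on; that hash is what you'd feed to the
    reorganize command."""
    for height in sorted(local_hashes, reverse=True):
        if explorer_hashes.get(height) == local_hashes[height]:
            return height, local_hashes[height]
    return None  # no common block in the range checked

# Hypothetical hashes: the chains agree up to height 102, then fork.
local_chain    = {100: "aa", 101: "bb", 102: "cc", 103: "d1"}
explorer_chain = {100: "aa", 101: "bb", 102: "cc", 103: "d2"}
print(last_common_block(local_chain, explorer_chain))  # (102, 'cc')
```

The hash values here are obviously shortened placeholders; real blockhashes are the long strings getblockhash prints.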

CONFIGURATION FILE

The location of your configuration file depends on your operating system:
  • On Windows: %appdata%\GridcoinResearch\
  • On Linux: ~/.GridcoinResearch/
  • On MacOS: ~/Library/Application Support/GridcoinResearch/
And it should look like this.
If you open up your gridcoinresearch.conf, you'll see the default one it generated. Note that if you entered your email earlier, the first line will have your email on it instead of "investor". If you decided you want to solo mine but didn't enter your email when you first started the wallet, go ahead and put your email on the first line in place of "investor". If you're a pool miner, just leave it as "investor".
Next, it's recommended that you use the addnodes on the gridcoin wiki. So our gridcoinresearch.conf will look like this.
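As a hedged example, a minimal gridcoinresearch.conf might look something like the following. The key names follow recent clients and the addnode hostnames are placeholders; copy the real addnode entries from the wiki:

```
email=investor
addnode=node1.example.org
addnode=node2.example.org
```

Solo miners would replace investor with their email address, as described above.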
A useful line for solo miners is PrimaryCPID=[YOUR CPID]. Sometimes your wallet can pick up on the wrong CPID so it's good to have that in there if you're solo mining.

RUNNING A LISTENING NODE

A listening node is a node that listens for blocks and transactions broadcast by other nodes and forwards them on to further nodes. For example, during the syncing process when you're getting your node running for the first time, you're downloading all the blocks from listening nodes. So running a listening node helps support the network.
Running a gridcoin listening node is simple. All you need to do is add listen=1 to your gridcoinresearch.conf and you need to forward port 32749 on your router.
If you don't know how to port forward, I'd suggest googling "How to port forward [your router manufacturer]".
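If you want to verify the port forward worked, a quick TCP check can be scripted. Run it from outside your network against your public IP; the demo below just uses a throwaway local listener so it is self-contained:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Check whether a TCP port accepts connections -- useful for
    verifying a 32749 port-forward from an outside machine."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Real usage would be something like: port_open("your.public.ip", 32749)
# Demo against a throwaway local listener:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0 = pick any free port
srv.listen(1)
demo_port = srv.getsockname()[1]
ok = port_open("127.0.0.1", demo_port)
print(ok)  # True
srv.close()
```

Online "open port checker" websites do the same thing from the outside, which is what actually matters for a listening node.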

QUICK LINKS

Gridcoin.us Official Website
Gridcoin.science Unofficial Website
Gridcoinstats.eu Block Explorer
NeuralMiner.io Block Explorer
Gridcoinstats.eu Faucet
Gridcoin.ch Faucet
Gridcoin Wiki
Gridcoin Github
GRCPool
Arikado Pool
And that's all I have for now!
I plan to keep this post up-to-date with changes in the client. So if anyone has any suggestions, have clarifications they want made, or maybe I got something wrong, then please feel free to leave a comment below or PM me!
submitted by Personthingman2 to gridcoin [link] [comments]

Why Verge Needs DigiShield NOW! And Why DigiByte Is SAFE!

Hello everyone, I’m back! Someone asked a question recently on what exactly happened to XVG – Verge and if this could be a problem for DGB – DigiByte - Here: DigiByte vs Verge It was a great question and there have been people stating that this cannot be a problem for us because of DigiShield etc… with not much explanation after that.
I was curious and did a bit more investigating to figure out what happened and why exactly it is that we are safe. So take a read.

Some Information on Verge

Verge was founded in 2014 with code based on DogeCoin, it was initially named DogeCoinDark, it later was renamed Verge XVG in 2016. Verge has 5 mining algorithms as does DigiByte. Those being:
However, unlike DigiByte those algorithms do not run side by side. On Verge one block can only be mined by a single algorithm at any time. This means that each algorithm takes turns mining the chain.
Prior to the latest fork there was not a single line of code that forced any algo rotation. They all run in parallel but of course in the end only one block can be accepted at given height which is obvious. After the fork algo rotation is forced so only 6 blocks with the same algo out of any 10 blocks can be accepted. - srgn_

Mining Verge and The Exploit

What happened then was not a 51% attack per se, but the attacker did end up mining 99% of all new blocks, so in effect he did have power over 51% of the chain. The way that Verge is mined allowed for a timestamp exploit. Every block that is mined depends on the previous blocks for determining the algorithm to be used (this is part of the exploit). Also, their mining difficulty is adjusted every block, and each block lasts 30 seconds (also part of the exploit). Algorithms are not picked but in fact, as stated previously, compete with one another. As for difficulty:
Difficulty is calculated by a version of DGW which is based on timestamps of last 12 blocks mined by the same algo. - srgn_
This kind of bug is very serious and at the foundation of Verge's codebase. In fact, in order to fix it a fork is needed, either a hard fork or a soft fork!
What happened was that the hacker managed to change the time stamps on his blocks. He introduced a pair of false blocks. One which showed that the scrypt mining algorithm had been previously used, about 26 mins before, and then a second block which was mined with scrypt. The chain is set up so that it goes through the 5 different algorithms. So, the first false block shows the chain that the scrypt algorithm had been used in the recent past. This tricks it into thinking that the next algorithm to be used is scrypt. In this way, he was essentially able to mine 99% of all blocks.
Pairs of blocks are used to lower the difficulty but they need to be mined in certain order so they can pass the check of median timestamp of last 11 blocks which is performed in CBlock::AcceptBlock(). There is no tricking anything into thinking that the next algo should be x because there is no algo picking. They all just run and mine blocks constantly. There is only lowering the difficulty, passing the checks so the chain is valid and accepting this chain over chains mined by other algos. - srgn_
Here is a snippet of code for what the time stamps on the blocks would look like:
SetBestChain: new best=00000000049c2d3329a3 height=2009406 trust=2009407 date=04/04/18 13:50:09
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=000000000a307b54dfcf height=2009407 trust=2009408 date=04/04/18 12:16:51
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=00000000196f03f5727e height=2009408 trust=2009409 date=04/04/18 13:50:10
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=0000000010b42973b6ec height=2009409 trust=2009410 date=04/04/18 12:16:52
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=000000000e0655294c73 height=2009410 trust=2009411 date=04/04/18 12:16:53
ProcessBlock: ACCEPTED (scrypt)
Here’s the first falsified block that was introduced into the XVG chain – Verge-Blockchain.info
As you can see, the first fake block has a time stamp of 13:50:09 and the next is set to 12:16:51; the following two blocks are also a fraudulent pair, and note that the next block is set to 12:16:52. So essentially, he was able to mine whole blocks - 1 second per block!
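To see why such backdated blocks could pass the median-timestamp check mentioned in the quote above, here is a toy model (illustrative numbers, not Verge's actual code):

```python
from statistics import median

def passes_mtp_check(prev_timestamps, new_timestamp):
    """A new block's timestamp must exceed the median of the previous
    11 blocks (a Bitcoin-style median-time-past rule). The exploit:
    alternating far-future and backdated timestamps keeps the median
    low enough that a backdated block still passes."""
    return new_timestamp > median(prev_timestamps[-11:])

# Honest chain: timestamps climb ~30s apart, so the median sits near
# the present and a backdated block would fail the check.
honest = list(range(0, 330, 30))  # 11 blocks, 30s apart
print(passes_mtp_check(honest, 330))  # True  (a normal next block)
print(passes_mtp_check(honest, 100))  # False (a backdated block)

# Attack pattern: pairs of (future, backdated) stamps drag the median
# down, so a block stamped far in the past is still ACCEPTED.
attack = [0, 5600, 1, 5601, 2, 5602, 3, 5603, 4, 5604, 5]
print(passes_mtp_check(attack, 6))  # True -- the backdated block gets in
```

The backdated blocks then feed artificially long apparent block times into the difficulty calculation, which is what crashed the difficulty and let him mine a block per second.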

The “Fix”

This exploit was brought to public attention by ocminer on the bitcointalk forums. It seems the person was a mining pool administrator and noticed the problem after miners on the pool started to complain about a potential bug.
What happened next was that the Verge developers pushed out a "fix" that did not really fix the issue. What they did was simply shrink the time frame in which the blocks can be mined. The bug was still exploitable, and the attacker even went on to try it again!
“The background is that the "fix" promoted by the devs simply won't fix the problem. It will just make the timeframe smaller in which the blocks can be mined / spoofed and the attack will still work, just be a bit slower.” - ocminer
Ocminer then cited DigiShield as a real fix to the issue! Stating that the fix should also stipulate that a single algo can only be used X amount of times and not be dependent on when the algo was last used. He even said that DigiByte and Myriad had the same problems and we fixed them! He cited this github repo for DigiByte:

DigiShield

It seems the reason this exploit was so lucrative is that the difficulty adjustment parameters were not aggressive enough to reduce the rewards the attacker received. Had the difficulty per block adjusted at a reasonable rate, as it does in DGB, the rewards would at least have dropped significantly per block.
The attacker was able to make off with around 60 million Verge, which equals about 3.6 million dollars at today's prices.
The exploit used by the attacker depended on the fact that time stamps could be falsified firstly and secondly that the difficulty retargeting parameters were inadequate.
Let’s cover how DigiShield works more in detail. One of the DigiByte devs gave us this post about 4 years ago now, and the topic deserves revisiting and updates! I had a hard time finding good new resources and information on the details of DigiShield so I hope you’ll appreciate this review! This is everything I found for now that I could understand hopefully I get more information later and I’ll update this post.
Let’s go over some stuff on difficulty first then I’ll try giving you a way to visualise the way these systems work.
First you have to understand that mining difficulty changes over time; it has to! Look at Bitcoin's difficulty for example – Bitcoin difficulty over the past five months. As I covered in another post (An Introduction to DigiByte), difficulty in Bitcoin is readjusted every 2016 blocks, each lasting about 10 minutes. This plays out over a span of about 2 weeks, and that's why you see Bitcoin's difficulty graph as a step graph. In general, the hash power in the network increases over time as more people want to mine Bitcoin, and thus the difficulty must also increase so that rewards are proportional.
The problem with non-dynamic difficulty adjustment is that it allows for pools of miners and or single entities to come into smaller coins and mine them continuously, they essentially get “free” or easily mined coins as the difficulty has not had time to adjust. This is not really a problem for Bitcoin or other large coins as they always have a lot of miners running on their chains but for smaller coins and a few years ago in crypto basically any coin other than Bitcoin was vulnerable. Once the miners had gotten their “free coins” they could then dump the chain and go mine something else – because the difficulty had adjusted. Often chains were left frozen or with very high fees and slow processing times as there was not enough hash power to mine the transactions.
This was a big problem in the beginning with DigiByte and almost even killed DogeCoin. This is where our brilliant developers came in and created DigiShield (first known as MultiShield).
These three articles are where most of my information on DigiShield came from. I had to reread the first one a few times to understand, so please correct me if I make any mistakes! They are in order from most recent to oldest and also in order of relevance.
DigiShield is a system whereby the difficulty for mining DigiByte is adjusted dynamically. Every single block, each lasting 15 seconds, has its difficulty adjusted for the available hashing power. This means that difficulty in DigiByte is as close as we can get to real time! There are other methods for adjusting difficulty, the first being the Bitcoin/Litecoin method (a moving average calculated every X number of blocks), then the Kimoto Gravity Well is another. The reason that DigiShield is so great is because the parameters are just right for the difficulty to be able to rise and fall in proportion to the amount of hash power available.
Note that Verge used a difficulty adjustment protocol more similar to that of DigiByte than Bitcoin. Difficulty was adjusted every block at 30 seconds. So why was Verge vulnerable to this attack? As I stated before Verge had a bug that allowed for firstly the manipulation of time stamps, and secondly did not adjust difficulty ideally.
You have to try to imagine that difficulty adjustment chases hashing power. This is because the hashing power on a chain can be seen as the “input” and the difficulty adjustment as the corresponding output. The adjustment or output created is thus dependent on the amount of hashing power input.
DigiShield was designed so that increases in mining difficulty come slightly harder than decreases. This asymmetrical approach allows mining to be more stable on DigiByte than on coins that use a symmetrical approach. It is a very delicate balancing act which requires the right approach or else the system breaks! Either the chain may freeze if hash power spikes and then dumps, or mining rewards are too high because the difficulty is not set high enough!
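A toy sketch of that asymmetric retargeting idea (the cap percentages and target time here are illustrative, not DigiByte's actual constants):

```python
def retarget(difficulty, actual_time, target_time=15,
             max_up=0.10, max_down=0.33):
    """Toy DigiShield-style adjustment: difficulty moves toward
    target_time/actual_time each block, but is allowed to fall in
    larger steps (max_down) than it may rise (max_up). That's the
    asymmetry: quick to relax, cautious to tighten."""
    adjust = target_time / max(actual_time, 1)
    adjust = min(adjust, 1 + max_up)    # cap how fast difficulty rises
    adjust = max(adjust, 1 - max_down)  # but let it fall much faster
    return difficulty * adjust

d = 100.0
fast = retarget(d, actual_time=5)    # blocks coming too fast: capped rise
slow = retarget(d, actual_time=600)  # hash power left: large drop allowed
print(round(fast, 2), round(slow, 2))  # 110.0 67.0
```

The cap on increases is what stops two lucky back-to-back blocks from sending difficulty sky high, while the looser floor lets the chain recover quickly after a hash-power dump instead of freezing.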
If you’ve ever taken any physics courses maybe one way you can understand DigiShield is if I were to define it as a dynamic asymmetrical oscillation dampener. What does this mean? Let’s cover it in simple terms, it’s difficult to understand and for me it was easier to visualise. Imagine something like this, click on it it’s a video: Caravan Weight Distribution – made easy. This is not a perfect analogy to what DigiShield does but I’ll explain my idea.
The input (hashing power) and the output (difficulty adjustment) both result in oscillations of the mining reward. These two variables are what controls mining rewards! So that caravan shaking violently back and forth imagine those are mining rewards, the weights are the parameters used for difficulty adjustment and the man’s hand pushing on the system is the hashing power. Mining rewards move back and forth (up and down) depending on the weight distribution (difficulty adjustment parameters) and the strength of the push (the amount of hashing power input to the system).
Here is a quote from the dev’s article.
“The secret to DigiShield is an asymmetrical approach to difficulty re-targeting. With DigiShield, the difficulty is allowed to decrease in larger movements than it is allowed to increase from block to block. This keeps a blockchain from getting "stuck" i.e., not finding the next block for several hours following a major drop in the net hash of coin. It is all a balancing act. You need to allow the difficulty to increase enough between blocks to catch up to a sudden spike in net hash, but not enough to accidentally send the difficulty sky high when two miners get lucky and find blocks back to back.”
AND to top it all off, the solution to Verge's time stamp manipulation bug is RIGHT HERE in DigiShield again! This was patched; see DigiShield v3, problem #7.
Here’s a direct quote:
“Most DigiShield v3 implementations do not get data from the most recent blocks, but begin the averaging at the MTP, which is typically 6 blocks in the past. This is ostensibly done to prevent timestamp manipulation of the difficulty.”
Moreover, DigiShield does not allow for one algorithm to mine more than 5 blocks in a row. If the next block comes in on the same algorithm then it would be blocked and would be handed off to the next algorithm.
DigiShield is a beautiful, delicate, yet robust system designed to prevent abuse and allow stability in mining! Many coins have adopted our technology!

Verge Needs DigiShield NOW!

The attacker has been identified as IDCToken on the bitcointalk forums. He posted recently that there are two more exploits still available in Verge which would allow for similar attacks! He said this:
“Can confirm it is still exploitable, will not abuse it futher myself but fix this problem immediately I'll give Verge some hours to solve this otherwise I'll make this public and another unpatchable problem.” - IDCToken
DigiShield could have stopped the time stamp manipulation exploit, and stopped the attacker from getting unjust rewards! A look at Verge's difficulty chart gives a good idea of what a single person was able to do to a coin worth about 1 billion dollars.
Here’s DigiByte’s difficulty steady, even and fair:
Maybe our developers could help Verge somehow – but for a fee? Or it might be a good way to get our name out there, and show people why DigiByte and DigiShield are so important!

SOURCES

Edit - Made a few mistakes in understanding how Verge is mined I've updated the post and left the mistakes visible. Nothing else is changed and my point still stands Verge could stand to gain something from adopting DigiShield!
Hi,
I hope you’ve enjoyed my article! I tried to learn as much as I could on DigiShield because I thought it was an interesting question and to help put together our DGB paper! hopefully I made no mistakes and if I did please let me know.
-Dereck de Mézquita
I'm a student typing this stuff on my free time, help me pay for school? Thank you!
D64fAFQvJMhrBUNYpqUKQjqKrMLu76j24g
https://digiexplorer.info/address/D64fAFQvJMhrBUNYpqUKQjqKrMLu76j24g
submitted by xeno_biologist to Digibyte [link] [comments]

State of the Redd-Nation :: May 23, 2016

Reddcoin Weekly Development Update

Welcome again Reddheads to another weekly update of Reddcoin Development.
This past week has achieved quite a few updates.

New v2.0 Wallet and testing progress

During this last week, I have been performing testing on the switch-over logic from v3 to v4 blocks on testnet using both the version 1.4.1 wallet and version 2.0.0.0 wallet. Results have been better than expected and I am very happy with the progress so far.

Network Testing with Super-Majority

The recent testing with testnet was performed by setting the super-majority to 510/1000 blocks (51%). That is, when there have been 510 v4 blocks created in the last 1000 blocks, the rules for v4 blocks are enabled (Enforce DER Signatures). v3 blocks will then be rejected by the network.
On mainnet, the setting will be updated to be a Super-Majority 85%
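The switch-over logic can be sketched as a simple count over the trailing window (a toy model, not the actual wallet code):

```python
def v4_rules_active(block_versions, threshold=510, window=1000):
    """Enforce v4 rules (DER signatures) once `threshold` of the last
    `window` blocks are version 4. 510/1000 = 51% was the testnet
    setting; mainnet will use an 85% super-majority (850/1000)."""
    recent = block_versions[-window:]
    return sum(1 for v in recent if v >= 4) >= threshold

# One block below the threshold: v4 rules not yet enforced.
chain = [3] * 500 + [4] * 509
print(v4_rules_active(chain))  # False -- only 509 of the last 1000 are v4

# One more v4 block tips it over: v3 blocks are now rejected.
chain.append(4)
print(v4_rules_active(chain))  # True
```

The 510/1000 and 85% figures come from the update above; the function itself is just an illustration of the counting rule.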

Staking with different versions

Wallet    | Staking | Block Ver | Accepted by v1.4.1 | Accepted by v2.0.0 | Rejected by v2.0.0
Ver 1.4.1 | YES     | v3        | YES                | NO                 | YES
Ver 2.0.0 | YES     | v4        | YES                | YES                | NO
SOME NOTES: After the switch of Super-Majority completes, version 1.4.1 nodes will continue to stake however, the network will reject those blocks. This is expected behaviour.

Transferring between versions

From \ To | v1.4.1 | v2.0.0
v1.4.1    | YES    | YES
v2.0.0    | YES    | YES
SOME NOTES: Current testing of transferring coins between different wallet versions has been successful. Current indications are that if you are not staking, you will be able to continue using v1.4.1 wallets. More testing to be done.
If you have any questions, or would like to know more on this, please let me know.

Translations

Translations continue to be updated which is great to see. Thank you to all those who are contributing their time and effort.
@Serkan34 continues to dominate on the European languages.
This is the running list of desired languages, and if you like you can also check the overall running list on Transifex here.

Wallet Recovery

As mentioned last week, wallet recovery is no easy task. There are a few tools around on the net that can help, but they are in no way guaranteed to provide 100% recovery.
So, it is important that you get in the habit of routinely backing up your wallet.dat file.
For the second time in as many weeks, I have used the utility called pywallet that in my case has done a reasonable job to recover broken wallets. It is a python based tool that allows some low level manipulation of wallet files.
In this second case, it involved recovering the private keys from a testnet wallet (100K keys in total). The wallet.dat would load into Reddcoin-Qt, but then the application would sit spinning its wheels, without error, and with no way to dump. Running the Qt application with -salvagewallet would truncate the number of addresses that should have been available in the wallet.
So, using pywallet, I was able to load up and read the available private keys in the wallet and dump them to a text file. This was essentially the same as last week: 100% of the private keys were salvageable.
I still have some problems importing those directly into a new wallet using the pywallet tool, and with such a large number of private keys, manual input was not an option.
I wrote a little script to pull the private keys from a text file and send an importprivkey RPC command to the wallet for each one. A little slower, but nonetheless it was effective and successful.
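That script might have looked something like this (a hypothetical reconstruction; the command name and the RPC transport are assumptions, since the original script isn't shown):

```python
def import_keys(path, rpc_call, label=""):
    """Replay dumped private keys into a wallet, one RPC call per key.
    `rpc_call` abstracts the transport -- e.g. a wrapper around the
    wallet's JSON-RPC interface, or a subprocess invocation of the
    daemon's CLI. Blank lines in the dump file are skipped."""
    imported = 0
    with open(path) as fh:
        for line in fh:
            key = line.strip()
            if not key:
                continue
            rpc_call("importprivkey", key, label)
            imported += 1
    return imported

# Dry run against a stub transport (no wallet needed):
import os
import tempfile

calls = []
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as fh:
    fh.write("KEY_ONE\nKEY_TWO\n\nKEY_THREE\n")
    dump_path = fh.name
n = import_keys(dump_path, lambda *args: calls.append(args))
os.unlink(dump_path)
print(n, len(calls))  # 3 3
```

With 100K keys this per-key loop is slow, as noted above, but it sidesteps pywallet's import problems entirely; a -rescan afterwards then picks up the associated transactions.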
After starting the wallet with -rescan, it brought everything up to date, with those associated addresses and their transactions in the wallet.

Large number of Micro Transactions on mainnet

Over the course of several weeks, there have been a number of instances where a large number of small transactions were broadcast onto the network.
It was brought up in a couple of forum messages on reddit and reddcointalk, so I thought it might be worthwhile just to touch on it again here.
Firstly, I would like to say this is similar to reports from the Bitcoin network, where small transactions were sent to fill blocks. So I was interested to monitor just how such behaviour would play out on Reddcoin's mainnet and what the effects might be.
Reddcoin mainnet has, in effect, 10x the capacity of Bitcoin. The blocksize for Reddcoin is 1MB, and block generation is targeted at every minute (Bitcoin is 1MB blocks every 10 minutes).
In the 'worst' case, these transactions occupied less than 25% of each block (about 230K in total).
With the number of transactions that were occurring, there were at times excesses of transactions that spilled over into subsequent blocks (again, only filling each of those to 25% capacity). When this occurred, there were runs of up to 10-12 blocks that were filled.
In the current state of the network, where the volume of transactions is generally low, it has been a good exercise to monitor the behaviour under sudden peak demand. I didn't hear of any cases where normal transactions or staking were affected.
That is not to say we are immune. If the normal operating capacity of a block were 50% or more, this would be more of a concern, and there could be an impact on transaction confirmation times.
Suffice it to say, the current side effect is that a number of you may have a lot of small transactions sitting in your addresses. I would not be too concerned at this point; let PoSV staking take care of those in due course (they will take a while to be selected due to their size), or in your next transfer, manually select a few to send them on their way.

Performance of PoSV

One of the things that has interested me for a long time with Reddcoin is how the POS mechanism behaves over time.
PoSV is unique amongst the POS crypto-currencies in the way that the weighting mechanism works, and in the way the stake reward is weighted depending on how long the coins have had to age.
A lot of things can influence the amount of each of your stake rewards.
Working with @deadpool and @reddibrek, we have been trying to define it in simple-to-understand terms.
But I am also studying the network in much greater detail in relation to a post on the ReddcoinTalk forum regarding PoSV v2.
This was the original statement, made about 1 year ago, and I believe there is merit in revisiting this PoSV v2 proposal. It provides an extra incentive to everyone who continues to stake, who in doing so gets a bigger percentage of return.
So in my spare time I have been extracting information about the current network, the blockchain and the metrics of how it is functioning, what returns stakers currently get and whether this remains a viable option.

Getting involved

We are a global community, and cross many borders but boundaries do not need to hinder us.
The crypto currency world has not reached its tipping point yet, but when it does, it is sure to escalate at an amazing rate. There are going to be many ups and downs, and an interesting ride for sure.
If you would like to get involved and don't know where to start, reach out and we will see where you can jump in. @Deadpool has a great Trello site going with activities that need looking at.

In Closing

There is still plenty to do, but we are getting closer and I look forward to another productive week.
So wherever you are, enjoy the week ahead.
Keep on staking!
x-posted (https://www.reddcointalk.org/topic/839/state-of-the-redd-nation-may-23-2016)
submitted by cryptognasher to reddCoin [link] [comments]

Security Updates January 20 ,2017

More Databases Targeted By Ransomware Attacks
Ransomware groups that have targeted MongoDB databases and Elasticsearch clusters are expanding their scope to include Hadoop and CouchDB data storage technologies. The Hadoop attacks are leaving behind messages telling admins to do a better job of securing their deployments. The CouchDB attacks have been demanding 0.1 bitcoins to return the data. Paying the ransom is inadvisable because previous attacks have not returned the wiped data.
For more than five years, widely used web content management systems (WordPress, et al.) have offered fertile gardens of vulnerable code for attackers to use to take control of many organizations' computers. Now the attackers have found that data storage systems are ripe for exploitation. There is an easy-to-discern pattern here of entrepreneurial organizations (open source included) attracting huge numbers but putting off security until it is too late to bake it in.
Read more in: - http://computerworld.com: Attackers start wiping data from CouchDB and Hadoop databases - http://www.theregister.co.uk: Insecure Hadoop installs next in 'net scum crosshairs
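The common thread in these attacks is an unauthenticated data store reachable from the internet. A quick first-pass check is simply whether the default ports answer at all; this sketch probes a host for the well-known ports of the affected products (host and port list are illustrative, and an open port still needs a follow-up check that authentication is actually enabled):

```python
# Probe a host for database ports commonly targeted by these
# ransom attacks. An open port is only a warning sign -- follow up
# by confirming authentication is enabled on the service.
import socket

DB_PORTS = {
    27017: "MongoDB",
    9200: "Elasticsearch",
    5984: "CouchDB",
    8020: "Hadoop NameNode",
}

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, name in DB_PORTS.items():
    if is_port_open("127.0.0.1", port):
        print(f"WARNING: {name} port {port} is reachable -- verify auth is enabled")
```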
Oracle's Mammoth Security Update
Oracle's first quarterly security patch update for 2017 comprises fixes for 270 vulnerabilities. The majority of the flaws are remotely exploitable. Oracle's E-Business Suite tops the list with 121 fixes, followed by 37 in Oracle Financial Services, and 18 in Oracle Fusion Middleware.
Unfortunately, this is really just an average-sized set of vulnerability fixes for Oracle, with no sign of any trending in a positive direction. The volume and the impact of Oracle's patch dumps, combined with demands for reduced duration of change windows in data centers, often lead to long delays before IT operations actually update servers. A number of forward-looking enterprises are using IaaS services like AWS or Azure to spin up full production copies of systems (with obfuscated data) to shorten patch testing cycles and shorten that vulnerability window.
Read more in: - http://www.zdnet.com: Oracle's monster security update: 270 fixes and over 100 remotely exploitable flaws - http://www.v3.co.uk: Oracle issues a whopping 270 security fixes - http://computerworld.com: Oracle patches raft of vulnerabilities in business applications
KrebsOnSecurity Publishes Detailed Account of Tracking Down Mirai Author
Brian Krebs has traced the origin of the Mirai botnet, which was used to launch massive distributed denial-of-service (DDoS) attacks against his website last September, to the New Jersey owner of a DDoS mitigation company. The attacks forced the KrebsOnSecurity website offline for several days. Mirai exploits poorly secured Internet of Things (IoT) devices to launch its attacks.
Read more in: - https://krebsonsecurity.com: Who is Anna-Senpai, the Mirai Worm Author?
U.S. Air Force's Prattle Would Take Honeypots to the Next Level
The U.S. Air Force's Prattle program aims to "transform... the traditional 'honeypot' method of catching hackers." Rather than simply disguising a honeypot as a network that hackers will try to access, Prattle will provide misinformation that could lead intruders to unimportant parts of the network, delaying them from getting to the sensitive data. The system could also provide documents that are fake or that contain digital watermarks.
Honeypots, if used properly, can be a great proactive security resource. ENISA (the European Union Agency for Network and Information Security) has an excellent resource on using honeypots called "Proactive detection of security incidents II - Honeypots" at https://www.enisa.europa.eu
Read more in: - http://federalnewsradio.com: Loose lips may better Air Force security with 'Prattle'
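The "digital watermark" idea can be as simple as embedding a unique token in each decoy document and alerting when that token later turns up in outbound traffic or a public dump. A minimal sketch of generating such a decoy (the token scheme and decoy text are made up for illustration, not anything from the Prattle program):

```python
# Generate a decoy document carrying a unique, traceable watermark.
# If the token ever appears in outbound traffic or a public paste,
# you know exactly which decoy was taken. Token scheme is illustrative.
import uuid

def make_decoy(title):
    token = uuid.uuid4().hex           # unique 32-hex-char watermark
    body = (
        f"{title}\n"
        f"Distribution: INTERNAL\n"
        f"Document ID: {token}\n"      # the embedded watermark
        f"(decoy content follows...)\n"
    )
    return token, body

token, doc = make_decoy("Q3 Network Topology Overview")
print(doc)
```

In practice the recorded token would feed an alerting rule (IDS signature, DLP rule, or paste-site monitor) keyed on that exact string.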
Rush to Save Climate Change Data Before New Administration
Scientists, librarians, archivists, and hackers have been working feverishly to preserve climate change data stored on the websites of the Environmental Protection Agency (EPA) and the National Oceanic and Atmospheric Administration (NOAA). The incoming U.S. administration is likely to remove much of the information from the public domain.
Read more in: - https://www.wired.com: Rogue Scientists Race to Save Climate Change Data From Trump
"Old-School" Malware Found Targeting Biomedical Firms' Systems
Malwarebytes researchers have found code on Macs that appears to target biomedical research companies. Dubbed Quimitchin by Malwarebytes and Fruitfly by Apple, the malware appears to have been infecting machines for at least two years. What is particularly curious about Fruitfly is that it contains very old coding functions. It is also built with Linux shell commands. Fruitfly takes screenshots and webcam images and harvests information about devices connected to the infected computer. Apple has released a fix to protect against Fruitfly infections; the update will be downloaded automatically.
Read more in: - http://www.darkreading.com: Old-School Mac OS Malware Spotted Targeting Biomedical Industry - http://computerworld.com: Mac malware is found targeting biomedical research - http://www.theregister.co.uk: 'Ancient' Mac backdoor discovered that targets medical research firms - http://arstechnica.com: Newly discovered Mac malware found in the wild also works well on Linux - https://blog.malwarebytes.com: New Mac backdoor using antiquated code
Sweden is Testing Ambulance Alert System That Interrupts Car Radios
Sweden is testing a system that would interrupt car radios when ambulances are nearby and need to get past. The system, which operates over an FM radio signal, also sends a message to the radio display. The ambulance alert system will give drivers more time to move out of the ambulance's path.
Read more in: - http://www.bbc.com: Ambulances to jam car radios in Sweden
Disgruntled Former Employee Extortion Leads To $250,000 Fine
Triano Williams, a former IT administrator at the American College of Education, changed the administrator password on a Google account used by the college before leaving his position. The affected account held email and course material for more than 2,000 students. When the school contacted Google to regain access to the account, they were told the account could be recovered only by the owner, in this case, Williams. When the school contacted Williams, he filed a complaint seeking "a clean letter of reference and payment of $200,000" in exchange for helping recover the account password. The school filed a suit against Williams, which resulted in a default judgment of nearly USD 250,000.
Read more in: - http://www.theregister.co.uk: College fires IT admin, loses access to Google email, successfully sues IT admin for $250,000 - https://www.tripwire.com: Fired IT Employee Demands $200K in Exchange for Unlocking Data
Researchers, Experts Develop Remote Software Update Protocol for Cars
A team of experts and researchers from New York University's Tandon School of Engineering and the University of Michigan's Transportation Research Institute has developed a protocol that will allow code embedded in vehicle components to be remotely updated. Some major car manufacturers have already implemented systems to update and fix vehicle software over Wi-Fi or cellular connections.
The technical issues around confidentiality/integrity/availability of any over-the-air update protocol are really important. Decisions about what counts as an acceptable "update" are equally important, both from a security perspective and from other angles, like fraud. We know mixing new features with vulnerability fixes is a bad idea, but in the consumer industry that has been the norm. We know at least two large car manufacturers have routinely included software in their products to cheat on emission tests; over-the-air updates could enable more of that. The auto industry (or if not those companies, their regulators) needs to define standards of practice around OTA updates.
https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
Read more in: - http://www.csmonitor.com: Are software updates key to stopping criminal car hacks?
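One basic integrity building block for any OTA scheme is refusing to apply a payload that does not verify against the manufacturer's signature. A minimal sketch using an HMAC over the payload (real OTA systems use asymmetric signatures such as Ed25519 so the device never holds a signing secret; the shared key here just keeps the sketch self-contained):

```python
# Verify an update payload against a MAC before applying it.
# Production OTA uses public-key signatures; HMAC is used here
# only to keep the example dependency-free.
import hmac
import hashlib

SHARED_KEY = b"demo-key-not-for-production"

def sign_update(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, signature: str) -> bool:
    expected = sign_update(payload)
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(expected, signature)

firmware = b"ecu-firmware-v2.1"
sig = sign_update(firmware)
print(verify_update(firmware, sig))              # genuine payload
print(verify_update(firmware + b"X", sig))       # tampered payload
```

The separate policy question raised above (what changes an "update" may legitimately contain) cannot be solved by signatures alone; it needs the standards of practice the commentary calls for.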
Webmaster Used Backdoor to Steal Data
A webmaster in the Netherlands built backdoors into sites he created and used the access to steal site visitors' personal data. Dutch police are warning 20,000 people that their email accounts were compromised. The data thief used the information to make purchases, open online accounts, and receive fraudulent money transfers.
Editor's Note
The unfortunate reality is that while theft is relatively uncommon, backdoors are extremely common. A relatively simple audit can uncover issues before code is deployed.
Read more in: - http://www.theregister.co.uk: Dodgy Dutch developer built backdoors into thousands of sites - http://www.bbc.com: Thousands warned they may be victims of rogue webmaster
US CERT Warns of Possible Zero-Day Attack Targeting Server Message Block
US-CERT is recommending that Windows admins take steps to protect their systems from a possible zero-day exploit targeting a vulnerability in Windows Server Message Block (SMB). Admins are advised to disable SMB v1 and block SMB traffic at the network boundary. The US-CERT advisory notes "that disabling or blocking SMB may create problems by obstructing access to shared files, data or devices. The benefits of mitigation should be weighed against potential disruptions to users."
Read more in: - http://www.theregister.co.uk: Kill it with fire: US-CERT warns admins to dump Server Message Block - http://news.softpedia.com: US-CERT Warns of Zero-Day Windows Exploit Owned by Shadow Brokers - https://www.us-cert.gov: Advisory: SMB Security Best Practices
Access Tokens and API Keys Found in Android Apps
Researchers examined thousands of Android apps and found that some contained embedded access tokens and API keys. Of the 16,000 apps analyzed, 2,500 were found to contain hard-coded secret credentials. Roughly 300 of the apps contained credentials for sensitive accounts, including Twitter, Dropbox, Flickr, and Amazon Web Services.
Read more in: - http://www.zdnet.com: Secret tokens found hard-coded in hundreds of Android apps - http://www.theregister.co.uk: Devs reverse-engineer 16,000 Android apps, find secrets and keys to AWS accounts - http://computerworld.com: Access tokens and keys found in hundreds of Android apps
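A first-pass scan for this class of mistake is straightforward: run pattern matches for credential-shaped strings over decompiled or source text before release. A minimal sketch (the patterns below are illustrative examples, not the researchers' actual ruleset, and real scanners use far larger pattern libraries plus entropy checks):

```python
# Scan source text for hard-coded credential patterns of the kind
# the researchers found embedded in Android apps. Patterns are
# illustrative, not exhaustive.
import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "api_key assignment": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I
    ),
}

def find_secrets(text):
    """Return (pattern name, matched string) pairs found in text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'String key = "AKIAABCDEFGHIJKLMNOP"; // oops'
print(find_secrets(sample))
```

Running a scanner like this in CI catches the credential before the APK ships; rotating any key that has already shipped is the only real remediation, since attackers can reverse-engineer the app just as the researchers did.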
submitted by hackcave to hackcave [link] [comments]
