GitHub - andyoakley/ec2-bitcoin-mining: Setup for Bitcoin ...



Can Amazon Inspector identify bitcoin mining malware in EC2 instances?

If not, is there an appropriate tool from Amazon that can identify such malware, especially in production instances?
submitted by sirkarthik to aws [link] [comments]

Is it cost effective to mine bitcoins using Amazon's EC2?

I don't own a computer that can handle the processing required but I also don't quite understand the cost structure of EC2 or what goes into mining well enough either. Any help would be appreciated!
submitted by Nblearchangel to Bitcoin [link] [comments]

There are now bots searching github for AWS keys, then using them to mine bitcoins - My $2375 Amazon EC2 Mistake (Not OP)

submitted by pat_o to Bitcoin [link] [comments]

How to mine bitcoins using an AWS EC2 instance

Hope to check it soon :)
submitted by berlindevops to devops [link] [comments]

Has anyone used Amazon EC2 for bitcoin mining? Is it cost effective?

submitted by rae1988 to Bitcoin [link] [comments]

There are now bots searching github for AWS keys, then using them to mine bitcoins - My $2375 Amazon EC2 Mistake (Not OP)

submitted by moon_drone to BetterBitcoin [link] [comments]

Mine Bitcoins on Cloud Servers

If you know how to set up NiceHash on Linux, you can get it going on any cloud hosting provider. I would like to emphasise that normal cloud hosting servers will not work for this, as they have no graphics cards.
So you're better off using servers normally sold for AI (artificial intelligence) or ML (machine learning) workloads, as both require GPU power.
Another thing is, with cloud hosting such as Google or AWS, they charge you for what you use. So if you aren't careful, you could get charged well over a thousand dollars every month, as Google and AWS will provide more compute/GPU power when your server is close to maxing out.
This is done because, normally, the people or companies that go with Google or AWS have the money to pay for it and usually never want their services to go down. If I ran a service on a pre-determined plan, say 30 GB of RAM, the server would stop when it maxed out. When the server stops, my service stops, which loses me money. With most cloud hosting companies, the servers will never stop and you can scale quite efficiently.
Lastly, mining bitcoins as a hobby or as a job on cloud servers isn't profitable at all. You will end up spending more money on cloud hosting than you earn in bitcoins.
My advice? Don't try mining for bitcoins using NiceHash or anything else on Google Cloud Platform with a free account, which breaches their Terms of Service. Create a few accounts with other hosting companies such as AWS, IBM, Oracle, or Alibaba Cloud and use their free plans to test out mining bitcoins on cloud hosting. If you like it, use the free AWS EC2 instance you get (free forever with limited use) and mine away.
Or look for alternative cloud hosting companies that are a lot cheaper but give the same results. Or better yet, get an ASIC miner. It requires a bigger initial investment, but it will pay for itself in the long run.
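A rough break-even check makes the point about profitability concrete. All figures below (hash rate, coin payout per unit of hash power, instance hourly rate) are illustrative assumptions, not real quotes:

```python
# Back-of-the-envelope check of cloud GPU mining profitability,
# with made-up numbers: every input here is an assumption.

def daily_profit(hashrate_mhs, usd_per_mhs_per_day, instance_usd_per_hour):
    """Mining revenue minus instance cost for one cloud GPU box over 24h."""
    revenue = hashrate_mhs * usd_per_mhs_per_day
    cost = instance_usd_per_hour * 24
    return revenue - cost

# e.g. a GPU instance billed at $3.06/hr that earns ~$2/day worth of coin:
print(f"${daily_profit(40, 0.05, 3.06):.2f} per day")  # deeply negative
```

With any plausible numbers for general-purpose cloud GPUs, the revenue stays far below the hourly bill, which is the poster's point.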
submitted by Sycrixx to NiceHash [link] [comments]

All Usernames

Popularity Number | Tried Username
1 root
2 admin
3 guest
4 supervisor
5 Administrator
6 user
7 tech
8 ubnt
9 default
10 support
11 service
12 888888
13 admin1
14 mother
15 666666
16 test
17 oracle
18 ftpuser
19 usuario
20 test1
21 test2
22 123456
23 test123
24 123
25 321
26 password
27 [email protected]
28 postgres
29 dev
30 testuser
31 tomcat
32 git
33 dspace
34 nexus
35 zabbix
36 teamspeak
37 ftpuser1
38 ubuntu
39 ts3
40 www-data
41 ldapuser1
42 minecraft
43 ghost
44 butter
45 redis
46 ts
47 teamspeak3
48 hadoop
49 tonyeadmin
50 pi
51 odoo
52 mysql
53 contador
54 cron
55 wp
56 ftp
57 weblogic
58 backup
59 ftp_user
60 ts3bot
61 1234
62 bin
63 student
64 user1
65 tom
66 ts3server
67 nagios
68 duni
69 test321
70 e8ehome
71 telecomadmin
72 db2fenc1
73 bitcoin
74 a
75 deploy
76 nginx
77 db2inst1
78 hdfs
79 abc123
80 jenkins
81 web1
82 dasusr1
83 operator
84 anonymous
85 csgo
86 camera
87 passw0rd
88 baikal
89 tplink
90 cssserver
91 tt
92 admins
93 tst
94 osmc
95 prueba
96 fulgercsmode123
97 y
98 odoo9
99 zookeeper
100 mahdi
101 wordpress
102 www
103 billing
104 111111
105 ftp_test
106 flw
107 b
108 redhat
109 steam
110 ohh
111 ops
112 abc123456
113 user8
114 ScryptingTh3cod3r~F
115 ts3user
116 centos
117 svn
118 user9
119 postgres123
120 vagrant
121 gituser
122 enable
123 elastic
124 user2
125 daemon
126 user3
127 walter
128 VM
129 havanaloca
130 csgoserver
131 demo
132 CUAdmin
133 servercsgo
134 css
135 spark
136 ftptest
137 data
138 localadmin
139 wangjc
140 ispadmin
141 1
142 adam
143 Accept-Language: zh-CN,zh;q=0.8
144 web
145 client
146 xuelp123
147 workpress
148 openssh-portable-com
149 cacti
150 zs
151 cubie
152 informix
153 Contact:
154 conf
155 hbase
156 ranger
157 msn
158 bot
159 spark1
160 radio
161 xc3511
162 pass
163 dev123
164 maven-assest
165 noah
166 linktechs
167 query
168 bot1
169 informix123
170 gzw
171 tss
173 es
174 oracle123
175 user123
176 mcserver
177 ftpadmin
178 linuxshell
179 app
180 optiproerp
181 wangshaojie
182 knox
183 org
184 nmstest
185 elasearch
186 Xinjiang
187 aticara
188 555
189 [email protected]
190 wwwdata
191 sh
192 jenkins123
193 henry
194 licongcong
195 crontab
196 oldbody
197 tez
199 zhang
200 Shaanxi
201 nobody
202 cf46e3bdb4b929f1
203 ethereum
204 aa
205 Jay123
206 ionhasbeenidle13hr
207 mysql-data
208 system
209 localhost
210 [email protected]
211 dzldblog
212 linuxprobe
213 bdos
214 raid
215 jira
216 zhouh
217 amx
218 wanjm
219 MPE
220 aaa
221 NISECTC5002
222 ec2-user
223 sandiego
224 iptv
225 shell
226 confluence
227 matthew
228 bizf
229 backupdb
230 hive
231 dell
232 tornado
233 zhou
234 blender
235 user0
236 c
237 @Huawei123
238 net
239 cat1
240 watch 'sh'
241 haohuoyanxuan
242 administrador
243 text
244 dell123
245 wybiftp
246 share
247 yanss
248 squid
249 kafka
250 db2as
252 bitcoinj
253 user01
254 cc
255 [email protected]
256 12345
257 azureadmin
258 duanhw
260 zhangfei
261 easton
262 geoeast
263 lwx
264 ldd
265 aws
266 gv1
268 useradmin
269 tlah
270 walletjs
271 ccc
272 user4
273 solr
274 chef
275 python
276 GET / HTTP/1.0
277 12345678
278 customer
279 sss
280 geminiblue
281 ausftp
282 Chongqing
283 nologin
284 username
285 mining
286 user11
287 news
288 2
289 muiehack9999
290 user5
291 ubuntu123
292 docker
293 nexxadmin
294 wq
295 OPTIONS / HTTP/1.0
296 gpadmin
297 test5
298 kuangwh
299 nagios123
300 ams
301 gfs1
302 vsb_pgsql
304 carl
306 nvidia
307 wallet
308 [email protected]
309 3
310 db2fenc1123
311 user6
312 www1
313 andy
314 assest
315 OPTIONS / RTSP/1.0
316 azure
317 webftp
318 tab3
319 aliyun
320 smartworldmss
321 hcat
322 walle
323 zhangfeng
324 openlgtv
325 User-Agent: Go-http-client/1.1
326 wangw
327 kelly
328 usuario1
329 [email protected]#
330 x
331 Huawei1234
332 user7
333 sysadmin
334 video
335 tmp
336 GET /nice%20ports%2C/Tri%6Eity.txt%2ebak HTTP/1.0
337 dianzhong
338 clfs
339 wangk123
340 rsync
341 livy
342 xuezw
343 hduser
344 testing
345 HEAD HTTP/1.1
346 bitcoind
347 matrix
348 cassandra
349 xx
350 F
351 backups
352 ktuser
353 barbara
354 sunxinming
355 OPTIONS sip:nm SIP/2.0
356 ftpuser123
357 michael
358 jiang
359 wangh
360 wolf
361 ikan
363 monitor
364 Proxy-Authorization: Basic Og==
365 pentaho
366 rootadmin123
367 wildfly
368 xxx
369 nobodymuiefazan123456
371 www2
372 serial#
373 From: ;tag=root
374 cat2
375 alice
376 robot
377 wowza
378 visitor
379 tab2
380 elasticsearch
381 gbase
382 motorola
383 superuser
384 User-Agent: Mozilla/5.01688858 Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36
385 sara
386 Jaydell123
387 linuxacademy
388 vps
389 xbmc
390 software
391 Call-ID: 50000
392 felix
393 portal
394 backupdb140
395 bdos123
396 greenplum
397 sshd
398 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,/;q=0.8
399 daemond
400 qwe123
401 webmaster
402 [email protected]#
403 web13
404 bpadmin
405 ligh
406 leo
407 Max-Forwards: 70
408 password123
409 vivacom
410 dbvisa
411 tab
412 mongo
413 ggg
submitted by Admir4l88 to Admir4l88Data [link] [comments]

"How a bug in Visual Studio 2015 exposed my source code on GitHub and cost me $6,500 in a few hours" ~ Bots continuously scan github source code looking for exposed amazon access keys which they use to spawn large numbers of EC2 instances to mine on someone else's dime...

submitted by dalovindj to Bitcoin [link] [comments]

From Platform-based Token to the Public Chain, Will CoinEx Embrace a Paradigm Shift?

From Platform-based Token to the Public Chain, Will CoinEx Embrace a Paradigm Shift?
Platform-based tokens shone in 2019, but such prosperity does not make up for the disadvantage of their narrow use case. How can they find new application scenarios beyond repurchase and destruction, and transaction fee deduction? The answer given by Binance is to expand the ecosystem of the public chain and develop the platform token into a public-chain token in a broader sense, like ETH.
Not long ago, CoinEx announced its plan to launch a public chain. The CET will not just be a token listed on the platform, but also the basic token in the ecosystem of public chains. Unlike the Binance Chain whose partners serve as its nodes, CoinEx Chain chooses nodes according to the votes of ordinary users. Obviously, this is another paradigm shift for the platform-based tokens to expand the application scenarios.
CoinEx Chain is a public chain created by CoinEx's professional blockchain underlying R&D team for DEX. Different from other DEXs, CoinEx uses three public chains: a DEX public chain, a Smart public chain, and a Privacy public chain, which run in parallel. They focus on transactions, smart contracts, and privacy respectively, and interoperate through "IBC protocols".
How to get involved in CoinEx Chain’s ecosystem? A detailed interpretation of the CoinEx DEX’s public-chain node recruitment is provided below.
How to participate in the CET nodes election?
CoinEx's nodes election rules are simple: Any holder who stakes at least 5 million CET on the chain is qualified, and the first 42 spots in the rankings automatically become valid validators entitled to generate blocks and share the proceeds. It should be noted that the election process is continuous: rankings are recomputed at every block.
Responsibilities of validators include preventing double signing and DDoS attacks, staying online at all times, upgrading nodes and configuration, building the private-key storage architecture, and participating in community governance. Besides, there are server hardware requirements for running a node as below:
After the mainnet is online (expected in early November), the CET withdrawn from CoinEx can be staked on the chain. Once completed, the staking can be canceled at any time, but it takes 21 days for the CET to return to the account.
Private investors holding less than 5 million CET will be entitled to voting power in the election of validators and receive bonuses as rewards.
How are the returns on being a CET validator?
A look at CoinEx's node return model shows that validator returns come mainly from two parts: block rewards and transaction fees.
The transaction fee includes the gas fee in the usual sense and the function fee. Relevant gas fees will be charged for any transaction initiated on the chain, and the corresponding function fee will be charged for special operations on the DEX chain. For example, acting like a DEX broker, a node will charge users for such operations as order matching, token issuing, trading pair creation, automated market making with Bancor, and address alias setting.
In terms of block rewards, the CoinEx Foundation will provide a total of 315 million CET over five consecutive years. To be specific, it will send out about 105 million CET in the first year, at 10 CET per block. Similar to Bitcoin's design, block rewards will gradually decrease over time, though on a different schedule: every year, 2 CET will be deducted from the reward for each block.
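The stated schedule can be sanity-checked: 105 million CET in year one at 10 CET per block implies about 10.5 million blocks per year, and dropping the per-block reward by 2 CET each year over five years sums to exactly the 315 million CET total:

```python
# Implied blocks per year: 105M CET / 10 CET per block = 10.5M blocks.
BLOCKS_PER_YEAR = 105_000_000 // 10

rewards_per_block = [10, 8, 6, 4, 2]  # CET per block, years 1 through 5
yearly_totals = [r * BLOCKS_PER_YEAR for r in rewards_per_block]

print(yearly_totals[0])    # 105000000 (year one)
print(sum(yearly_totals))  # 315000000 (five-year total, as stated)
```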
The basic data of CoinEx is shown in the figure below. On these assumptions, the estimated annual transaction-fee income for CoinEx's validators comes to around 38 million CET, and, assuming a 50% staking rate across the whole network, the annualized rate of return for CoinEx's validators is 10%.
That is to say, in the case of a successful re-election of CoinEx's validators, the basic token-denominated return rate will be around 10% for the first year. This figure will be higher early on due to the relatively small total stake in the beginning.
How to calculate the actual income of the year?
Here we've summarized a calculation formula where numbers can be quickly inserted for your reference. Suppose the total stake on a node is a, of which p% is CET staked by the node itself and q% is CET entrusted to it by retail traders; the total stake of the whole network is b; the actual returns distributed across the whole network are c; and the commission ratio of the node is k. Then the actual income of the validator for the year is ac(p% + kq%)/b.
For example, suppose the total stake on a node is 10 million CET, including 8 million CET staked by the node itself and 2 million CET staked by ordinary CET holders, and the commission ratio of the node is 10%. With the total network stake at 1 billion CET and actual distributed returns of 150 million CET, the actual income of the validator for the year is 1.23 million CET. In these conditions, the annualized rate of return on the node's own CET is around 15.3%.
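As a sketch, the formula and the worked example above can be put into code, with all numbers taken directly from the post:

```python
def validator_income(a, p, q, b, c, k):
    """Annual income of a validator, per the formula a*c*(p% + k*q%)/b.
    a: total stake on the node        p: fraction self-staked
    q: fraction staked by delegators  b: total network stake
    c: total returns distributed      k: node's commission ratio"""
    return a * c * (p + k * q) / b

# Numbers from the worked example:
income = validator_income(a=10e6, p=0.8, q=0.2, b=1e9, c=150e6, k=0.10)
print(f"{income:,.0f} CET")  # 1,230,000 CET

# Relative to the node's own 8M CET stake this is 15.375%,
# which the post rounds to "around 15.3%".
print(f"{income / 8e6:.3%}")
```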
So we can see that the actual income of CoinEx's validators can be divided into two parts by asset ownership: income from CET staked by the node itself and commissions on CET staked by ordinary holders.
In other words, if a validator can keep the CET public chain safe, contribute to the development of CoinEx's ecosystem, and help it gain more attention and favor from ordinary users, it can earn an annualized income higher than the basic staking income. Retail users may stake their CET on more professional and responsible nodes and share in the dividends of the node and the CET public chain.
In nodes elections, the Matthew effect has always been a point of criticism. So will ordinary token holders drive the centralization of validators under CoinEx's rules? The answer is no. As with all other PoS models, some moderate centralization, or rather a trade-off between decentralization and centralization, is inevitable. But at least mathematically, the annual income from CET staked by retail traders on different validators depends on k, the commission ratio of the node, with a and q% held constant for retail traders staking the same amount of CET. That is to say, in terms of economic efficiency alone, the return on a retail trader's votes for different nodes does not depend on the node's scale, but on its commission ratio and on more implicit factors such as its security and reliability (or reputation).
There are many other public chains adopting the “Supernodes” election, and what are the advantages and disadvantages of CoinEx?
There are many public chains adopting such “Supernodes” election mechanism, among which EOS and IOST are best known. So what are the similarities and differences in the nodes election between CoinEx and its counterparts?
From the perspective of the nodes election, IOST needs 2.1 million votes (one vote per token). At the price of USD 0.0044 at the time of writing, that costs at least USD 9,300, a really low threshold. Published data shows that EOS now requires about 290 million votes (30 votes per token) for the top 21 supernodes. According to EOS REX's data, if a consortium without a user base wants to obtain block-producing rights by renting tokens, it will cost around USD 2.55 million a year, approximately RMB 18 million. By contrast, the threshold for a CoinEx Chain node is only 5 million CET, approximately USD 100,000 at an estimated price of USD 0.02, a moderate cost.
In terms of hardware, the configuration mentioned above costs about USD 1,000 per year. The estimated operating cost on AWS for a t3.xlarge is USD 1,458 per year, so one master with one backup costs only USD 2,916 a year. (The specific figures will vary slightly in practice.) Compare the server EOS recommended for running a node when it officially announced its node election: an Amazon AWS EC2 x1.32xlarge host, with a 128-core processor, 2 TB of memory, 2x1920 GB of SSD storage, and 25 Gb of network bandwidth. The operating cost of such a server, with one master and one backup, is 13.338 * 24 * 2 = USD 640 a day. (The bandwidth cost allocated per day is negligible.) It is thus obvious that CoinEx costs less, avoiding the kind of server waste seen with EOS and thus eliminating that intangible cost.
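Putting the quoted figures side by side (prices as stated in the post; actual cloud pricing will drift):

```python
# CoinEx: AWS t3.xlarge quoted at ~$1,458/yr, one master plus one backup.
coinex_yearly = 1458 * 2              # $2,916 per year

# EOS recommended host: x1.32xlarge at $13.338/hr, master plus backup.
eos_daily = 13.338 * 24 * 2           # ~$640 per day, matching the post
eos_yearly = eos_daily * 365          # ~$233,700 per year

print(coinex_yearly, round(eos_daily), round(eos_yearly))
```

The gap is roughly two orders of magnitude, which is the comparison the article is making.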
In terms of the number of nodes, CoinEx Chain has 42 validators, EOS has 21 block-producing nodes per round, and IOST has 63. CoinEx Chain sits in the middle of the decentralization-versus-efficiency trade-off. In addition, the estimated hardware cost of a CET node is USD 1,000 a year, which is relatively low.
Overall, CoinEx Chain's nodes election is designed in a reasonable way, which is destined to be a milestone for CoinEx. CoinEx once pioneered "trade-driven mining" and has even gone through "repurchase and destruction". Now it targets the DEX public chain, which is deemed a paradigm shift that lifts CET out of the pattern of platform-based tokens. Let's look forward to its future development.
Follow CoinEx Chain on Social Channels:
submitted by CoinExcom to Coinex [link] [comments]

This "overloaded" downtime is horrific

Hi all. I'm a long-time automated trader in traditional futures markets and a few years ago I became interested in Bitcoin and mined some coin back when GPU mining was feasible. Recently I created a BitMEX account and started recording data. I discovered that several of my existing systems could be adapted to work on XBTUSD, and that one was very profitable on paper. I've since written the code necessary to run it live. It's largely a liquidity provision strategy and it needs to join (and stay at) the best bid or offer with a single order when it has a signal. This is of course in order to receive a rebate if hit, as opposed to paying commish on every trade. The strategy is not complicated in terms of execution, it does not maintain multiple open orders, or layer the book in any fashion at all. It only cancels when my signal reverses.
I quickly discovered however that on BitMEX, even when I wasn't receiving the dreaded "overloaded" message, simply getting an order up took 2-4 seconds for the API request to complete. Yes, I am using an ec2 instance that is half a millisecond, ping-wise, from the closest IP address of the multi-homed API domain, yes I am using a keep-alive connection that I make sure to keep alive. Ignoring overloads for the moment, the issue is that even if I can get an order up, say at the bid, by the time the order is live, the market has often moved a few ticks away from me, which means I now need to move my order, but by the time it moves I have the same problem again. Then on top of this you have the overloads (probably caused in part by myself and others chasing the market precisely when it starts to move).
I decided to measure over time exactly how long API requests were taking, and how often I was getting an "overloaded" reply, such that I might build a statistical expectation of what to look for to either indicate that it's a good time to trade, or that I should just shut my system down. I started sending orders every 10 seconds, and measuring how long the API request took to complete, and recording when I got an "overloaded" message. The following chart illustrates results for the past hour, today, shortly before I composed this message:
If the observation is below 0 then it is an "overload"; otherwise it is the time the order request took to complete. The chart represents a total of 328 observations, 197 (60%) of which are overloads. This means that for every 5 orders that I attempt to place or update, I can guess that I will receive an overload reply 3 times. Autocorrelation on overloads is also high -- if I just got an overload message on an API request, the probability that my next API request will receive an overload reply is 83%.
For API requests that go through, the average response time is 2.15 seconds, with only 19 of my 131 good requests completing in less than a second. Now, even if I could trade and wasn't getting overloads more than half the time, an order taking more than a couple of milliseconds to go up is really unacceptable. Trading on a "pro-sumer" FCM at CME or CBOT with a VPS and crappy software like NT gets you a millisecond or two between your signal and having an order live, and your order is passing your broker's pre-trade risk and all that first.
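For anyone reproducing this kind of measurement, the statistics quoted above (overload rate, the conditional overload probability, mean latency of good requests) can be computed from a recorded series like so; the sample data here is made up for illustration:

```python
def overload_stats(samples):
    """samples: request latency in seconds, or a negative value
    marking an 'overloaded' reply."""
    flags = [s < 0 for s in samples]
    rate = sum(flags) / len(flags)
    # P(next request overloads | this one did) -- the autocorrelation figure
    followers = [b for a, b in zip(flags, flags[1:]) if a]
    cond = sum(followers) / len(followers) if followers else 0.0
    good = [s for s in samples if s >= 0]
    mean_latency = sum(good) / len(good) if good else 0.0
    return rate, cond, mean_latency

# Made-up recording: -1 marks an overload reply.
samples = [-1, -1, 2.3, -1, 1.9, -1, -1, 2.5, 2.0, -1]
rate, cond, mean_latency = overload_stats(samples)
print(rate, cond, round(mean_latency, 3))  # 0.6 0.4 2.175
```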
The really whack thing is that during periods of complete overload BitMEX is still doing plenty of trades, as evidenced by the trades feed, which means that (precluding people having special access) either those trades are exclusively "Close" orders (execInst orders with no quantity specified), or people are smashing the API in hopes of having one of their many orders get through. If the former, such "overload" should quickly resolve (as opposed to lasting for 15+ minutes). If the latter, those people are certainly the cause of this problem and there should be a negative incentive for them to continue such behavior.
As things stand, there's no way I can trade on BitMEX, although I would like to be able to.
Edit: Another problem I didn't mention in my original draft is how long orders that are accepted by the API take to show a status of "working" via the WS feed. In my testing this can be around 10 seconds on average during times of high load. Interestingly, BitMEX seems to have a dead order sweeper that runs 13 seconds after an order is accepted, in the case of an order that was accepted but can't be executed or put on the book.
submitted by wanna_mm to BitMEX [link] [comments]

How many Bitcoins could Amazon mine ?

Hypothetically, if someone or a company with a massive distributed computing force dedicated the whole system to mining bitcoins for one hour, how much money do you think that would generate?
submitted by donbigone to BitcoinMining [link] [comments]

Anyone bullish on XLNX?

There's a pretty interesting debate in the AI space right now on whether FPGAs or ASICs are the way to go for hardware-accelerated AI in production. To summarize, it's more about how to operationalize AI - how to use already trained models with millions of parameters to get real-time predictions, like in video analysis or complex time series models based on deep neural networks. Training those AI models still seems to favor GPUs for now.
Google seem to be betting big on ASICs with their TPU. On the other hand, Microsoft and Amazon seem to favor FPGAs. In fact Microsoft have recently partnered with Xilinx to add FPGA co-processors on half of their servers (they were previously only using Intel's Altera).
The FPGA is the more flexible piece of hardware, but it is less efficient than an ASIC and has been notoriously hard to program against (though things are improving). There's also a nice article out there summarizing the classical FPGA conundrum: they're great for designing and prototyping, but as soon as your architecture stabilizes and you're looking to ramp up production, taking the time to do an ASIC will more often be the better investment.
So the question (for me) is where AI inference will be in that regard. I'm sure Google's projects are large scale enough that an ASIC makes sense, but not everyone is Google. And there is so much research being done in the AI space right now, and everyone's putting out so many promising new ideas, that being more flexible might carry an advantage. Google have already put out three versions of their TPU in the space of two years.
Which brings me back to Xilinx. They have a promising platform for AI acceleration both in the datacenter and embedded devices which was launched two months ago. If it catches on it's gonna give them a nice boost for the next couple of years. If it doesn't, they still have traditional Industrial, Aerospace & Defense workloads to fall back on...
Another wrinkle is their SoCs are being used in crypto mining ASICs like Antminer, so you never know how that demand is gonna go. As the value of BTC continues to sink there is constant demand for more efficient mining hardware, and I do think cryptocurrencies are here to stay. While NVDA has fallen off a cliff recently due to excess GPU inventory, XLNX has kept steady.

XLNX TTM P/E is 28.98
Semiconductors - Programmable Logic industry's TTM P/E is 26.48

submitted by neaorin to StockMarket [link] [comments]

I know this is a dumb question...but is there any way to have a little go at mining without buying a massive kit first?

As a New Zealander, there isn't an easy way to buy bitcoins (you need int'l bank transfers etc.), so I wondered if I could set up a machine to mine just a little and then figure out whether to scale up... would really love to get involved, but my technical know-how is severely lacking
submitted by marmaladeontoast to BitcoinMining [link] [comments]

QuarkChain Weekly AMA Summary-06/30/2018

As many of you already know, there is a weekly AMA (ask me anything) on our Telegram/WeChat groups every Saturday, from 7-8 PM PST. This is the summary of last week's AMA. We are always happy to take feedback and answer your questions; see you all this Saturday!
Part 1: Marketing Questions
  1. Q: Do you have some good news to share with us? A: We successfully organized two meetups in Singapore and attended the Blockchain Connect Conference in Silicon Valley this week. On July 4th, we will unlock tokens, bringing the circulating supply to 770 million. What's more, we will launch our public testnet V2.0 and announce new partners before mid-July. All development and marketing plans are on track.
  2. Q: Zilliqa has just announced testnet V2.0 with 1000 nodes (4 shards) and a lot of exciting features. I know QuarkChain will only have roughly 100 nodes in the coming public testnet. How do you compete with Zilliqa regarding nodes? What do you think about the new features they just introduced? A: Firstly, the number of nodes doesn't relate directly to scalability. For example, EOS has only 21 block-producing nodes, and Ontology just released its testnet with 15 nodes. In Ethereum and Bitcoin, having more nodes even means a slower network; most people run so many nodes just for the incentive. Secondly, the reason why Zilliqa requires so many nodes is that its number of shards depends on the number of nodes (in this case, 250 nodes per shard). However, we don't have this constraint because of our design, which gives us a lot of room to achieve high TPS with a small number of nodes. You will understand more when you see our testnet, which will be released pretty soon.
  3. Q: Blockchains are quite competitive. What plans do you have in place to encourage the community to support this project continuously? A: We will continue to post about our development progress, ecosystem building, and more on our social media, including Twitter, Telegram, Medium, Steemit, and Reddit. We will also run marketing campaigns after the public testnet.
  4. Q: How is QuarkChain going to address the highly inflated TPS claimed by other companies? A: High TPS doesn't mean everything. Besides high TPS, people also care about security, decentralization, stability, the token ecosystem, etc. Even regarding TPS, the critical difference between QuarkChain and other companies is that we can keep adding TPS (scale out) on demand, while they quickly hit their TPS limit.
Part 2: Technology Questions
  1. Q: Can QuarkChain achieve decentralization with smaller nodes? How many nodes does QKC need, at a minimum, to maintain the high TPS the whitepaper suggests? A: For your first question: thanks to our sharding technique, we can achieve scalability with a smaller number of nodes. For example, suppose our testnet has 21 clusters and each one has 64 nodes; then the total is 21 * 64 = 1,344 nodes. For your second question, it depends on how powerful each node is. Right now, a single powerful EC2 node can support 10k+ TPS.
  2. Q: In Andre Cronje's article, he gave one issue QuarkChain may face, which is: "So since these are parallel chains, what happens if everyone simply transacts on Ethereum A, and no one uses Ethereum B or C? Well, then Ethereum A becomes congested and it will start suffering throughput. This will cause fees to go up, so now, if you wanted to have a cheap transaction, you could simply process your transaction on Ethereum B, and if both A and B have an equal load share, then you could move to C. This concept is the market-driven collaborative mining, thanks to the reward structure the load is shared across the shards. The problem though, you have all your funds in Ethereum A and you want to now participate to the ICO on Ethereum C, but A is congested and you have to pay high fees to make your transfer from A to C, and not only do you have to pay the high fees, but you have to wait for the root chain to finalize your transactions, adding more overhead." Have you overcome this obstacle? If yes, how did you do that? A: The article is missing a part about how we partition system state. Adding more shards will move some state from existing shards to the new shards. Thus, the congestion is inherently resolved after re-sharding.
  3. Q: For cross-shard consensus, will it use the same mining tools as root-chain transactions? A: Cross-shard transactions rely on the root chain, so they use the same consensus as the root chain.
  4. Q: I heard that the micromanagement users have to do between their shards would create bad UX. How do you solve this issue? A: This is addressed by the smart wallet, which handles the transaction details so that users won't notice them.
  1. We have now opened our official Reddit account! You are welcome to subscribe, post there, and ask us questions at (NOTICE: our website URL ends in quarkchainio, NOT quarkchain)
  2. We also have a new Medium account! Welcome to continue following us and posting comments!
Thank you for reading last week's AMA summary! The QuarkChain community appreciates your support!
Website Telegram Twitter Steemit Medium Reddit Weibo
submitted by QuarkChain to quarkchainio [link] [comments]

Rented servers with decent GPUs for mining?

Seems like a viable alternative to building your own ghash machine: renting someone else's for a similar fee. I know you can order private server access from colo services, but I don't know of any that outfit their boxes with nice GPUs.
Anyone know of some place that does that, maybe something for contributing to [email protected] projects but that can be repurposed for mining?
(I realize that this kind of thing would probably be spoken widely about if it existed, but it can't hurt to ask.)
submitted by apowers to Bitcoin [link] [comments]

Improved fork resilience proposal

Note: This develops ideas from my older proposal here:
You do not need to have read the previous version though, since what I'm presenting here is improved along a number of dimensions, and spells out assorted details.

Design principles

This proposal is designed to meet the following goals:
  1. Bitcoin needs to fork now to increase the block size.
  2. It should be possible to fork Bitcoin without having ASIC miners on board before your fork.
  3. In a hypothetical world in which ASIC miners all stopped mining, Bitcoin (or one of its forks) ought to be able to continue producing blocks. (Genuine worry, see e.g.:
  4. Nonetheless, the ASIC miners have built up an incredible infrastructure, providing unmatched security.
  5. It makes sense for Bitcoin forks to attempt to benefit from the security provided by the existing ASIC infrastructure.
If you disagree with these, there's probably not too much point arguing about the rest.

Meeting the design goals

To meet the design goals, producing blocks with an sha256(sha256(...)) PoW needs to remain possible. Similar reasoning has led people to propose a reduction in difficulty following the fork. I presume that if (say) a fork had signed up 20% of the hash power, then it would set its new difficulty to (around) 20% of the old difficulty. This seems risky though, as the reduction in difficulty would increase the risk of 51% attacks. (While the needed hash power for a 51% attack is the same regardless of the difficulty, with very low difficulty, blocks will arrive much faster, making it much harder to mitigate such attacks.) Additionally, in the event of a "mining heart attack" (a sudden drop in ASIC hash power), it is unlikely that a hard fork with reduced difficulty could be delivered fast enough to prevent a collapse in value.
In any case, following a fork, there is likely to be much higher variance in transaction times, as miners move between chains, and the difficulty adjustment algorithm struggles to keep up. People have proposed more responsive difficulty adjustment algorithms, but these produce problems in the longer term, including making certain attacks easier.
This suggests that an alternative approach is needed, namely one in which most blocks are produced using the standard PoW, but in an emergency, an alternative CPU mined PoW could take over. The idea of my proposal is to allow the commencement of mining of CPU mined blocks only after a certain time has elapsed, where the passing of time is measured by the production of timing blocks. In normal times, this reduces the variance of the time between blocks, thus reducing the variance of confirmation times, and making Bitcoin more reliable as a means of payment. In crisis times, such as after a fork or "mining heart attack", this enables CPU miners to produce blocks even when ASIC miners are not.
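The timing-block idea rests on a standard statistics fact: the total time for 60 small blocks (an Erlang sum) is far more predictable than the time for one equivalently hard block (a single exponential). A quick simulation sketches this; the numbers here are illustrative assumptions, not consensus parameters:

```python
# Illustrative simulation; numbers are assumptions, not consensus parameters.
import random
import statistics

random.seed(0)
MEAN_TOTAL = 30 * 60          # ~30 minutes, in seconds
N = 10_000                    # trials

def cv(xs):
    """Coefficient of variation: stdev relative to the mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

# One big block: a single exponential with mean MEAN_TOTAL.
one_block = [random.expovariate(1 / MEAN_TOTAL) for _ in range(N)]

# Sixty timing blocks: sum of 60 exponentials, each with mean MEAN_TOTAL / 60.
sixty_blocks = [sum(random.expovariate(60 / MEAN_TOTAL) for _ in range(60))
                for _ in range(N)]
# cv(one_block) comes out near 1.0; cv(sixty_blocks) near 1/sqrt(60) ≈ 0.13.
```

With 60 sub-blocks the relative spread drops from roughly 1 to roughly 1/√60 ≈ 0.13, which is what lets CPU mining be gated to start only after ASIC miners have genuinely gone quiet.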

This proposal

I propose the introduction of two new block types. For clarity, I will call the existing blocks "type A blocks" (A for ASIC). "Type C blocks" (C for CPU) fulfil a similar function to type A blocks, but will be produced with a different algorithm. "Type T blocks" will be small blocks used for timing. Both type C and type T blocks will be CPU-mineable. I will now spell out the details of these new block types.
  • Type T blocks may follow either type A, C or T blocks, but no more than 60 type T blocks may be chained in a row.
  • Type T blocks contain a single coinbase transaction, and no other transactions.
  • Allowable coinbase transactions for type T blocks take as input the current block reward divided by 80.
  • The outputs of coinbase transactions from type T blocks are not spendable until followed by a type C block.
  • Type C blocks may only follow uninterrupted chains of 60 type T blocks.
  • Type C blocks contain a single coinbase transaction, and arbitrarily many other transactions (subject to the block size limit).
  • Allowable coinbase transactions for type C blocks take as input the current block reward divided by four, plus the sum of transaction fees from any included transactions.
  • Note that by construction, the total coinbase outputs of a run of 60 type T blocks and one type C block is 60/80+1/4 = 1 times the block reward, so there is no change to the total number of BTC being produced.
  • In counting blocks for difficulty adjustment, type T blocks are ignored. Thus the difficulty is adjusted after 2016 type A or C blocks since the last adjustment.
  • The new difficulty for type A blocks is adjusted as it is currently. ( new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) ) ) )
  • The difficulty of a type T block (and hence a type C block) is set according to the formula new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) * ( num_type_C_blocks / 100 ) ^ ( 1 / 2 ) ) ), where num_type_C_blocks is the number of type C blocks out of the last 2016 type A or type C. The implicit target here is 100 type C blocks per 2016, meaning a drop in ASIC miner profits of around 5%, which is hopefully not enough to overly annoy them. The slower adjustment to the number of type C blocks reflects the greater sampling variation in num_type_C_blocks and the fact that CPU power changes more slowly than ASIC power.
  • Note, that with roughly 5% of all profits going to CPU miners in normal times, type T block times should be around 30 seconds, and type C block times should be a bit less than 10 minutes. This is in line with my prior proposal, linked above.
  • Multiple low-difficulty T blocks are not equivalent to one higher-difficulty block, because the variance of the time to produce N blocks of difficulty K is lower than the variance of the time to produce one block of difficulty NK. (Erlang vs. exponential distributions.) The low variance of the time to produce 60 T blocks helps ensure that mining of C blocks only starts after around 30 minutes, i.e. only when ASIC miners have failed to produce A blocks for some reason.
  • The initial difficulty of producing type T and C blocks following the fork should be set so that in a hypothetical world in which (a) only one person CPU mined and (b) the price post-fork was equal to the price pre-fork, that one miner would exactly break even in expectation by CPU mining type T and C blocks on Amazon EC2, assuming that they obtained 5% of all block rewards. This is likely to be a substantial under-estimate of the true cost of CPU mining, due to people having access to zero (or at least lower) marginal cost CPU power, but an under-estimate is desirable to provide resilience post-fork.
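The two retarget rules above can be sanity-checked with a minimal sketch (function names are mine, not from any client):

```python
# Sketch of the retarget rules described above; names are illustrative.
TWO_WEEKS = 14 * 24 * 3600  # target span for 2016 type A/C blocks, in seconds

def clamp4(old_difficulty, factor):
    """Limit any single retarget to the usual [1/4, 4x] range."""
    return max(old_difficulty / 4, min(old_difficulty * 4, old_difficulty * factor))

def adjust_type_a(old_difficulty, time_since_last_adjustment):
    # Standard rule: scale by how far off the two-week target we were.
    return clamp4(old_difficulty, TWO_WEEKS / time_since_last_adjustment)

def adjust_type_t(old_difficulty, time_since_last_adjustment, num_type_c_blocks):
    # Same time correction, damped toward the 100-per-2016 type C target.
    factor = (TWO_WEEKS / time_since_last_adjustment) * (num_type_c_blocks / 100) ** 0.5
    return clamp4(old_difficulty, factor)
```

For example, hitting the two-week window exactly with the target 100 type C blocks leaves both difficulties unchanged, while only 25 type C blocks in a window halves the type T difficulty (the square root damps the correction).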

Desirable properties

This proposal:
  • substantially reduces the variance of block times, increasing Bitcoin's use as a means of payment, and hence (probably) increasing its price,
  • encourages more people to run full nodes, due to the returns to CPU mining, increasing decentralization,
  • provides protection from sudden falls in ASIC hash rate, reducing tail risk of holding Bitcoin, and thus again (probably) increasing its price,
  • helps provide hash power post-fork, without driving away the existing miners and their hardware,
  • helps us deliver a block-size increase!
submitted by TomDHolden to btc [link] [comments]

[dev] A quick update, and a work in progress smart contracts guide

As much as it seems odd saying this with a sticky at the top of the subreddit, I do know some people don't read stickies, so - Dogecoin Core 1.10.0 IS OUT NOW and it's a huge security update you really do need. However, it does require reindexing the blocks on disk, and if you absolutely cannot do so (i.e. you run a service that can't handle the downtime right now), there's also Dogecoin Core 1.8.3 which has the most important parts back ported to it. If you use Dogecoin Core, you need to upgrade to one of these two, seriously.
On that note, we've got about 20-25% upgraded now; there's an (approximate) pie chart at that you can watch if you're really curious. I'm seeing 1.10.0 nodes come online then go offline - if you can keep a 1.10.0 node online, it would be much appreciated. I've got a few EC2 nodes online while the update rolls out, as well, to help support the numbers.
Enough of that, what's coming next? bitcoinj & Multidoge HD work is more or less just rolling along quietly waiting primarily on others at the moment. We're planning out Dogecoin Core 1.11, which will be based on Bitcoin Core 0.12. The big new thing in there will be OP_CHECKLOCKTIMEVERIFY (often shortened to CLTV), which finally lets us use smart contracts securely on the main Dogecoin block chain. It's going out to Bitcoin in their 0.11.2 release, however as we've just released a client, we're going to skip that one (or, it may be produced as a version we test but never release).
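For readers wondering what a CLTV contract actually looks like, the canonical timelock pattern is `<locktime> OP_CHECKLOCKTIMEVERIFY OP_DROP <pubkey> OP_CHECKSIG`. Here is a hedged sketch of assembling that script; this is illustrative Python, not Dogecoin Core code, with opcode values taken from the Bitcoin script encoding:

```python
# Hypothetical sketch of the canonical CLTV timelock script pattern.
# Opcode byte values are from the Bitcoin script encoding (CLTV was OP_NOP2).
OP_CHECKLOCKTIMEVERIFY = 0xB1
OP_DROP = 0x75
OP_CHECKSIG = 0xAC

def push_int(n: int) -> bytes:
    """Minimal little-endian script-number push (positive n only)."""
    data = b""
    while n:
        data += bytes([n & 0xFF])
        n >>= 8
    if data and data[-1] & 0x80:
        data += b"\x00"  # pad so the sign bit stays clear
    return bytes([len(data)]) + data

def cltv_script(locktime: int, pubkey: bytes) -> bytes:
    """<locktime> CLTV DROP <pubkey> CHECKSIG"""
    return (push_int(locktime)
            + bytes([OP_CHECKLOCKTIMEVERIFY, OP_DROP])
            + bytes([len(pubkey)]) + pubkey
            + bytes([OP_CHECKSIG]))
```

Funds locked this way only become spendable once the chain passes the given block height (or timestamp), which is the building block for trustless refunds and payment channels.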
I promised everyone a guide to smart contracts, and... well it's gone a bit awry. What I thought would be around 6 pages is now at 9 pages and growing, so it's going to take a while to finish. However, it does cover the basics, and hopefully is enough to both let a general audience understand what smart contracts are, and a more technical audience understand how they can use smart contracts. The document so far is up at but there should be further revisions later.
Lastly, testnet - there's still a lot of old nodes on testnet, please update to 1.10.0, especially if you're mining (because someone's generating old v2 blocks and they're causing problems).
I'm away the weekend of the 29th, so that update is likely to be on the 30th instead, but I'll try to get something out that weekend. Might be quiet for a bit while the dust settles on the new release, anyway!
submitted by rnicoll to dogecoin [link] [comments]

EthMining on Cloud Services ?

Has anyone tried mining ether on a cloud service? Like where you can rent/buy computational power to do the work (like Amazon Web Services and such)? I would like to know how the performance is on such services. Do they offer good computational speed at low cost, enough to profit from ether mining?
I plan to shift to this because my PC has an 8GB GTX 850M card which churns out 1.6 MH/s on average, which is pretty low and makes it simply pointless to continue at this rate....
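For what it's worth, the profitability question comes down to simple arithmetic: your share of the network hash rate times the daily coin issuance, minus the hourly instance cost. A hedged back-of-envelope sketch (every constant below is an illustrative assumption, not live market data):

```python
# Back-of-envelope only: every constant here is an illustrative assumption.
def daily_profit(my_mhs, network_mhs, block_reward, blocks_per_day,
                 coin_price_usd, instance_usd_per_hour):
    """Expected daily USD profit of renting one cloud instance to mine."""
    share = my_mhs / network_mhs                      # fraction of network hash rate
    revenue = share * block_reward * blocks_per_day * coin_price_usd
    cost = instance_usd_per_hour * 24
    return revenue - cost

# A 1.6 MH/s card vs. an assumed ~30 TH/s network at assumed prices:
p = daily_profit(my_mhs=1.6, network_mhs=30_000_000, block_reward=5.0,
                 blocks_per_day=5760, coin_price_usd=300.0,
                 instance_usd_per_hour=0.9)
```

Plugging in a single card against a network measured in tens of TH/s makes the instance cost swamp the mining revenue by orders of magnitude, which is why renting general-purpose cloud GPUs to mine rarely pays.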
In fact, I'd appreciate any comments about this! Thank you!
submitted by rsvishalakhil to EtherMining [link] [comments]

A practical way to put miners back to use and back Bitcoin with compute power.

I've heard time and again "Why doesn't Bitcoin just do [insert useful computation here] computation to secure the network!"
Why not Folding@home?
Why not cloud computing?
Bitcoin verification must satisfy these properties: the work must be expensive to produce but tunable in difficulty, it must be cheap for every node to verify, and it must have no value outside of securing the network.
That last one is important because if something like Folding@home were used as the proof of work, and then a cure for cancer was found, the value of bitcoin would crash. Folding@home doesn't satisfy the 2nd property anyway.
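The asymmetry behind those properties, expensive to find but cheap to check, is easy to see in a toy double-SHA256 proof of work (a sketch, not Bitcoin's actual header or target rules; difficulty here is just a leading-zero-bits target):

```python
# Toy proof of work; a sketch, not Bitcoin's actual header/target rules.
import hashlib
import itertools

def pow_hash(data: bytes, nonce: int) -> int:
    h = hashlib.sha256(
        hashlib.sha256(data + nonce.to_bytes(8, "little")).digest()).digest()
    return int.from_bytes(h, "big")

def mine(data: bytes, difficulty_bits: int) -> int:
    """Expensive: brute-force a nonce whose hash clears the target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        if pow_hash(data, nonce) < target:
            return nonce

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap: a single double-SHA256, no matter how hard the search was."""
    return pow_hash(data, nonce) < (1 << (256 - difficulty_bits))
```

Mining takes on the order of 2^difficulty_bits hash attempts, while verification is always one hash; that one-sided cost is exactly what useful work like protein folding lacks.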
However, we can put miners back to good (profitable) use, and back Bitcoin value with computational power!
Amazon currently offers a cloud compute service which charges for its use by the hour.
The Bitcoin network of verifiers currently represents the largest distributed computing network in the world.
If we build some software to distribute relatively arbitrary GPGPU computation to miners, and we build a service that offers this computation power to clients, then we can sell Bitcoin mining compute power.
Would this reduce the security of the Bitcoin network? Yes and no.
The security of the Bitcoin network is about to drop due to mining rigs turning off as a result of non-profitability.
We could have these miners turn their rigs back on for the cloud compute service. This wouldn't represent a loss of Bitcoin network security, because we have already lost this security.
If this service was only to accept Bitcoins, then the value of Bitcoins would be backed partly by computation power -- much in the way that it is currently backed partly by drug trade.
Some of you have misinterpreted this as a proposal for a new currency that is secured by arbitrary computation. I explained above why that would not be possible.
This is a proposal for a service that pools mining power for sale -- payable in Bitcoins. Ex-miners would contribute to the service, and be paid daily for their service.
The idea is to back the bitcoin economy with a new merchant service, using powerful equipment that we are about to stop using anyway.
Edit #2:
People keep bringing up centralization. As if this somehow centralizes the entire currency. Suggesting that this needs to be decentralized is as silly as suggesting that your local bar/restaurant needs to be "decentralized" before it can accept Bitcoins.
This is a merchant service -- not a currency!
submitted by kdoto to Bitcoin [link] [comments]

TIFU by posting AWS keys to a public GitHub

Next thing I knew I had a slew of x-large windows machines mining bitcoin. Luckily Amazon is incredibly understanding and they plan to reimburse my company for $6,000 in fraudulent EC2 charges
submitted by cleaninterface to tifu [link] [comments]

Bitcoin mining on infrastructure downtime?

Right, first of all: assume that full permission from my employer has been received. Please don't beat me up too much on this point.
Why not have infrastructure on its downtime mining for bitcoin?
Let's rule out the production environment. I administer a dozen powerful hypervisor machines that are used almost exclusively during working hours for internal dev/QA environments. That's a lot of downtime. The only out-of-hours tasks are backups and some load-test suites, and those touch only a fraction of the estate and are well documented.
Would it not be reasonable to have a slim *nix box power on most evenings, steal most of the CPU on each hypervisor, and mine for all it's worth? It'd obviously have to be configured like an EC2 spot instance: easily killed on contention, but quick to spawn on idleness.
I also have a managed hosting staging environment (managed by me) that does nothing (not even backups) over night. I don't even have to worry about burning out the hardware there due to its SLA.
I know bitcoin's a huge bubble waiting to burst, but why not learn more about dynamic task scheduling and put nothing but CPU time (and some sysadmin time) in and get some cash back out? Thoughts?
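The spot-instance analogy above boils down to a small watchdog loop: poll the load average, start the miner when the box goes idle, kill it the moment real work shows up. A hedged sketch (the `./miner` binary and the load thresholds are placeholders, not a real tool):

```python
# Hedged sketch of spot-instance-style scheduling on an idle hypervisor.
# `./miner` is a hypothetical binary; the load thresholds are illustrative.
import os
import subprocess
import time

def decide(load1: float, mining: bool, start: float = 0.5, stop: float = 2.0) -> str:
    """Return 'start', 'stop', or 'hold' given the 1-minute load average."""
    if not mining and load1 < start:
        return "start"
    if mining and load1 > stop:
        return "stop"
    return "hold"

def run(miner_cmd=("./miner",), poll_seconds=30):
    """Watchdog loop: call run() from a service unit to start scheduling."""
    miner = None
    while True:
        action = decide(os.getloadavg()[0], miner is not None)
        if action == "start":
            miner = subprocess.Popen(miner_cmd)   # box is idle: start mining
        elif action == "stop":
            miner.terminate()                     # contention: yield immediately
            miner = None
        time.sleep(poll_seconds)
```

The gap between the start and stop thresholds adds hysteresis, so a brief load spike doesn't thrash the miner on and off.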
submitted by Deku-shrub to sysadmin [link] [comments]
