# Video - Consensus Algorithms, Blockchain Technology and Bitcoin UCL

An academic lecture by Andreas M. Antonopoulos explaining the consensus algorithm, "Proof of Work," used by Bitcoin and many other blockchains. This talk was presented in collaboration with the Department of Computer Science at University College London. Andreas is a UCL alum.

## TRANSCRIPT

Andreas: So this is great. It's back to my roots. I was a student here in 1990. But I was actually born in London and I'm half British, which you won't hear in my accent anymore, Brooklyn changed that. The first day I arrived at UCL, I was telling my mom, "I'm going to be doing my orientation session at the University College Hospital, which is across the street, they have a big auditorium and that's where they're sending all the students." And she said, "Well, just be careful because last time you were there someone assaulted you with forceps." Turns out I was born at UCL and then ended up going to college at UCL and I had no idea. So this is really a fantastic honor for me and I'm truly very happy to be here and be doing this at UCL.

Unidentified Male: 50%.

Andreas: 50%, right. So half the hashes that come out of this will have the first bit as zero. What are the chances the first 2 bits are zero? 25%. What are the chances the first 10 bits are zero, right? So it gets exponentially more difficult as you increase the number of zeroes at the front. So say I want you to generate a random number using SHA-256 by putting some inputs into it. And the criteria I have, rather bizarre criteria if you think about it, is I want you to produce a random number, but I want that random number to have a certain pattern to it. And the pattern is it has to have zero bits at the beginning of the number. In numerical terms, you can describe this as: the number has to be smaller than a certain value, right. So effectively, if you say the first bit has to be zero, what you're saying is that the number has to be less than 2 to the 255, right. If you say the first 2 bits have to be zero, you're saying the number has to be less than 2 to the 254, etcetera, right. So what I'm saying is generate a seemingly random number that is smaller than a specific value. That value is called the target. Specifically, in Bitcoin we call it the difficulty target, because the lower that value is, the harder it is to find one of these numbers. Now how do you find one of these numbers? Let's take a typical example. SHA-256 is often used to fingerprint documents, so you can create a fingerprint that allows you to verify that a document hasn't been modified. Typical example of that: let's say you signed a contract with someone, you take the PDF, you throw it through SHA-256, you get a fingerprint. Now any PDF that produces that same fingerprint is the exact same PDF that you originally had. And you can verify that.
If you want to download software from a website and that software is extremely sensitive, security-sensitive code, you'll often see at the bottom of the website, it will say, "Verify that the software package you download has this fingerprint." And you can know that not a single bit in that software has changed. This is due to a characteristic of SHA-256, which is that the output changes dramatically even if you change just one bit. So this is a cascade effect: you change one bit in the input, and what you get out is not just one bit different. It is in a completely different part of the 256-bit space, so you never know where you're going to land. So let's take a simple example. I take this phrase, hello! Anybody who has a laptop can throw that into sha256sum or shasum. Any Mac, any UNIX-based system, any Windows-based system has that function in it; type it and it's going to give you a hash. You tell it to use the 256-bit algorithm, you type in capital H-E-L-L-O! It's going to give you a specific fingerprint. That fingerprint is specific to this phrase. You change the exclamation mark, you add a space, you change the first letter to a lowercase h: completely different fingerprint, right. But as long as you enter the exact same phrase, the result will be a specific fingerprint. Anybody did that on their laptop? Okay. What's the result?
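The hands-on demonstration being set up here is easy to reproduce offline. This is a minimal sketch using Python's standard hashlib, which computes the same SHA-256 digest as the shasum command-line tool; no digests are hard-coded, since they depend on the exact input string:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return the SHA-256 fingerprint of a string as 64 hex digits."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

h1 = fingerprint("Hello!")
h2 = fingerprint("hello!")  # tiny change: lowercase the first letter

print(h1)
print(h2)
print(h1 != h2)  # True: the cascade effect lands the output elsewhere in the 256-bit space
```

Re-running this anywhere in the world yields the same two digests for the same inputs, which is exactly what makes the fingerprint independently verifiable.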

Unidentified Male: All this?

Andreas: Just give me the first few digits.

Unidentified Male: a8d19153 (Inaudible0:11:28).

Andreas: Anyone in the world can take the phrase, hello! Put it into SHA-256 and they will get a8d19153. So, how do I change that? Well, if I wanted to produce different fingerprints from this, I could introduce some change to the input. How about adding a number? So I can take the same string and instead I add a number to it, hello, zero. And please help me out here if you don't mind.

Unidentified Male: Completely different 0E0978C.

Andreas: 0E0978C. It's 64 characters long. So completely different output, and I can keep going: hello, one. No need to do it, but completely different outputs, right. Each time I do this I'm going to get a different result. Now you remember before I said I want my target to be, let's say, that the first 4 bits are zero. Well, this one fulfills it. The first hexadecimal digit is zero, which means that this particular input produces a value less than the target. That was relatively easy to do, right, because once again the probability of finding a hash where the first hexadecimal digit is zero is not that low. I can just run a few iterations and I'm going to get at least one value that's less than the target. Now if you gave me the string hello and you said, "Find me a number, any number, that you add to hello so that it produces a hash that starts with zeroes," how do I find that number? I have to brute-force it. I have to try every possible number. So I would start: hello zero, hello one, two, three, four, five, six, seven, eight, and just keep going in a loop as fast as I can in order to produce hashes. And at some point, a hash would pop out that would meet my difficulty target. And as soon as I have that hash, I can show it to you and I can do two things: I can show you the hash; and I can show you the number. So I can say, for example, hello 39,137 produces a hash that has two zeroes in the beginning, I'm just guessing, right. If I do that, can you verify? Well, of course, you could take hello 39,137, plug it into a hash, come out with a fingerprint. And if the first few digits are zero, you're like, "Oh, that is correct." What have I just proved? What I've proved is that I did several tens of thousands of SHA operations to find that. There is no way I can produce that result without doing the work.
There is no way I can produce that result without searching through the space of fingerprints in a brute-force manner. And based on the difficulty: 1 bit zero, 50%; 2 bits zero, 25%; 3 bits zero, 12.5%; 4 bits zero, 6.25%; etcetera, etcetera. You can estimate statistically, simply from the pattern, approximately how many operations I need to do on average before I find it. All right. Who's got access to a blockchain right now and can look up the latest block? Anyone? Look up the block hash of the latest block.
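The brute-force loop just described can be written directly. This is a toy sketch, with difficulty expressed as a count of leading zero hex digits for readability; it is a simplification, since Bitcoin actually compares the hash against a full 256-bit target value:

```python
import hashlib

def find_nonce(base: str, zero_digits: int):
    """Try base+0, base+1, ... until the SHA-256 hash starts with
    the required number of zero hex digits, i.e. falls below the target."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{base}{nonce}".encode()).hexdigest()
        if h.startswith("0" * zero_digits):
            return nonce, h
        nonce += 1

# Each extra zero hex digit multiplies the expected work by 16.
nonce, digest = find_nonce("hello", 2)
print(nonce, digest)

# Verification is a single hash, no matter how much work the search took.
assert hashlib.sha256(f"hello{nonce}".encode()).hexdigest() == digest
```

The asymmetry is the whole point: the search takes many iterations on average, but anyone can check the claimed nonce with one hash operation.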

Unidentified Male: What if you do and I then start?

Andreas: All right. Can you read me out the beginning?

Unidentified Male: 10C300000000000000000.15DE165.

Unidentified Male: Because miner moves forward.

Andreas: Sorry.

Unidentified Male: None of the other miners would agree to trust that.

Andreas: No, none of the other miners would agree to trust that. So imagine a scenario where the miner gets greedy. And instead of writing a check to themselves for 25 Bitcoin, they write a check for 26 Bitcoin. They construct the candidate block. They fill it with transactions. They put in the first transaction to pay themselves 26 Bitcoin. And now they have the header, and now they have to search for this number, the nonce. And they do all of this work and they produce a hash. They find the proof-of-work. They then disseminate that block to all of the other miners. What do the other miners do? They validate the block against the consensus rules and they look through it. And one of the consensus rules is you can only pay yourself a reward at the correct rate based on what block we're at. So if it's 2009 to 2012, the first 210,000 blocks, 50. If it's from then until approximately August 2016, 25. At a very specific moment on a specific block that equation is going to produce a result of twelve and a half, shortly after approximately August 2016 when enough blocks have been mined. Sorry, it halves every 210,000 blocks. So after 420,000 blocks have been mined, on the very next block everyone will expect to see a reward of twelve and a half. And if you write yourself a check for 25 at that point, your block is going to be rejected. Okay. Yes.
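The reward rule the other miners enforce is a pure function of block height, so the check is mechanical. Here is a sketch of the subsidy schedule (halving every 210,000 blocks, and ignoring transaction fees, which a miner may also legitimately claim):

```python
def block_reward(height: int) -> float:
    """Block subsidy in BTC: starts at 50 and halves every 210,000 blocks."""
    halvings = height // 210_000
    return 50.0 / (2 ** halvings)

def is_reward_valid(claimed: float, height: int) -> bool:
    """A block claiming more than the schedule allows is rejected."""
    return claimed <= block_reward(height)

print(block_reward(0))        # 50.0
print(block_reward(210_000))  # 25.0
print(block_reward(420_000))  # 12.5
print(is_reward_valid(26.0, 300_000))  # False: the greedy miner's block is rejected
```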

Unidentified Male: Who sets the validation rules?

Andreas: Who sets the validation rules? This is a really good question. The validation rules are defined by the software, the reference implementation. So if you want to know what the current validation rules are, you read the source code of the Bitcoin Core reference implementation. So it's a C++ program that contains within it functions that do things like "is this block valid," "is this transaction valid," you know. And these functions evaluate things based on a simple set of rules. Now you might ask, "Where are these rules documented?" And the answer is "Nowhere." The rules are whatever the core implementation says the rules are. So the rules are only documented in C++. In fact, this is one of the tricky parts of consensus. The rules are whatever the core implementation does, including the bugs, including every bug ever found since 2009. If you want to write a competing implementation you have to simulate every bug that was found in the code since 2009 and process the blocks in exactly the same way, because you have to reprocess them from the beginning until today, which means that you have to fail where Bitcoin Core failed and succeed where Bitcoin Core succeeded, bugs and all. Yes.

Unidentified Male: Whose definition of C++?

Andreas: Whose definition of C++? Well, it's --

Unidentified Male: It keeps changing.

Andreas: Yes. So with that software code comes a long list of very specific dependencies: dependencies on underlying database code, C++ compiler versions, Boost and various other libraries that are used, and until recently the OpenSSL libraries. Now they've changed that a bit. So, yes, there is a whole edifice of dependencies that constitutes the reference code. Yes.

Unidentified Male: (Inaudible0:28:37) a consensus (Inaudible0:28:39) change it or a centralized (Inaudible0:28:40) change it?

Unidentified Male: Start on the next one.

Unidentified Male: Would it be possible to reach consensus earlier by (Inaudible0:46:00) then you wouldn't have to wait for the next block (Inaudible0:46:13)?

Andreas: Well, the difference in proof-of-work will be very, very minor.

Unidentified Male: Yes.

Andreas: Right.

Unidentified Male: But it's still (Inaudible0:46:23).

Andreas: There are other protocols for consensus.

Unidentified Male: Okay.

Andreas: The problem with that is that it can cause various weird effects in the network. So this model of eventual convergence after 10 minutes works, right. There is another protocol called GHOST, G-H-O-S-T, that does some interesting things that allow, for example, orphaned children's proof-of-work to still be counted, because you still did the work, and to get maybe a partial reward for that. That's not Bitcoin, right. Bitcoin works with this very simplistic algorithm. There are many other competing consensus mechanisms.

Unidentified Male: So if I'm a miner who has proved the block (Inaudible0:47:13) --

Andreas: Yes.

Unidentified Male: -- use the 25 Bitcoins?

Andreas: Well, so here's the question. Each one of these has a check that pays 25 Bitcoin to someone. Checks are only valid if they're on the longest blockchain. So suppose this check ends up off the longest blockchain. The entire network knows that this check is no longer on the longest blockchain. Essentially, this transaction never happened; you disappear from history. The truth is the longest chain. The other chain never happened. Now here's where a critical consideration comes in. When you earn a reward check for mining a block, you can't spend that for 100 blocks. Why? Because for one block, history is fickle, right. After 100 blocks, if your transaction is still in the chain, it is history, right. Think about the Bitcoin blockchain as geological strata. You're drilling a core sample in the ice in Antarctica: the top ten centimeters are slush, they come, they go, they melt, the wind blows stuff around, you can't tell anything. You go three meters down, you're looking at 100 years of history and that layer hasn't moved in 100 years. You go 300 meters down and you're looking at the Cretaceous era and that millimeter-thin layer hasn't moved in millions of years at all. And the way that happens is because layers get deposited on top and compacted. And the deeper they are, the harder it is to change them. Bitcoin's consensus algorithm creates a geological history, if you like. At the top, the wind's blowing and things are very fickle. You go 144 blocks down, which is one day old, and the probability of a block changing after 144 confirmations is vanishingly small. But the algorithm allows it. You can change block two. All you have to do is produce a competing longest chain that has more proof-of-work cumulatively than the 366,000 blocks mined until today, in ten minutes, because you have to do that before the other chain gets one block longer, right. And this is how the cumulative work protects history. Yes.
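The "longest chain" rule described above is really a heaviest-chain rule: nodes follow the valid chain with the most cumulative proof-of-work. A simplified sketch of that fork choice, under the assumption that each block carries a precomputed "work" field (real nodes derive it from each block's difficulty target):

```python
def chain_work(chain):
    """Cumulative proof-of-work is the sum of each block's work."""
    return sum(block["work"] for block in chain)

def best_chain(chains):
    """Nodes follow whichever valid chain has the most total work."""
    return max(chains, key=chain_work)

chain_a = [{"work": 10}, {"work": 10}, {"work": 10}]
chain_b = [{"work": 10}, {"work": 10}, {"work": 12}]  # same length, more work

print(best_chain([chain_a, chain_b]) is chain_b)  # True
```

To rewrite deep history, an attacker's replacement chain has to exceed the cumulative work of everything stacked above the block being changed, which is why confirmations compound like the geological strata in the analogy.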

Unidentified Male: (Inaudible0:50:10) alternate change, though, they have discard them (Inaudible0:50:14)?

Andreas: They discard them pretty quickly, because you can always retrieve the alternate block: if a node is publishing the longest chain, it also has the history behind it, so you can always retrieve that. I'm not sure, I mean, that's an implementation detail. But it's a good question. All right. Yes.

Unidentified Male: But it means that for achieving more Bitcoins, it's also necessary that you are near the right part of the network, because if you're mining a cheaper block and you spread it, if you are near, like, low computing --

Andreas: Yes.

Unidentified Male: -- (Inaudible0:50:57) you will not achieve that.

Andreas: So this is a very, very important consideration.

Unidentified Male: Like that, who is richer (Inaudible0:51:06)?

Andreas: Who is bandwidth rich? There are two important considerations in mining: how cheaply you can buy electricity, and how low-latency a network connection you can achieve. If you have cheap electricity, you can mine as cheaply as possible. Keep in mind that the efficiency of mining equipment is bounded on the upper side by Moore's law. We're already seeing 14-nanometer chip fabrication. Bitcoin mining is approaching the edge of Moore's law faster than desktop computing. Why? Because there's a three-billion-dollar economic incentive behind it, which doesn't exist in desktop computing anymore. Bitcoin mining is now driving the development of silicon fabrication, which is shocking. But what that means is that you cannot squeeze more efficiencies out of silicon. Now it becomes a matter of how densely you can pack that silicon in a chip, how densely you can pack the chips on a board, how densely you can pack the boards in a rack, how quickly you can suck heat out of that, and how quickly you can push kilowatts or megawatts of energy into that rack. It's a data center game. And then your economic efficiency depends on the cost of your inputs (electricity), the cost of your operations and ability to maintain the hardware, and your ability to propagate these blocks faster on the network. Latency is enormously important, which means that at the moment most mining has migrated to China. And the reason for that is because subsidized coal-fired electrical power in China is, I think the ironic term would be, dirt-cheap. And so as a result, it leads to economic concentration there. However, China has bandwidth issues and latency is a big problem, which means that as the block size increases it puts the Chinese miners at a disadvantage. If you have a one-megabyte block and you're trying to propagate it to eight nodes, it takes a certain amount of time. If you take that one-megabyte block and you increase the size limit to eight megabytes, it takes you eight times as long.
And while you're propagating a block, someone else is beating you to it. All right. How often does a fork happen? Approximately every day, once a day on average. A two-block fork, maybe once a week, maybe once a month. If you start seeing a three-block fork, something really weird is happening on the Bitcoin network. Why? Because imagine the two competing sides created two blocks, then the two competing sides of the network started building on top. And again by coincidence, two blocks were discovered sufficiently far apart on the network to propagate to two equal parts of the network, and sufficiently close together in time to not be able to overwhelm each other. And then everybody starts building on top. And again by coincidence, two blocks are found almost simultaneously at opposing sides of the network. The probability of that happening once, okay; twice, rare; three times, exceedingly rare, etcetera, etcetera. It's an exponential function. And that is the basis of the consensus algorithm. Do we have a crowd?
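The exponential falloff in fork depth can be made concrete with a toy model: if a one-block fork happens with some probability p on any given block, an n-block fork needs n such coincidences in a row, so its probability falls roughly as p to the n. This is an assumption-laden simplification (real propagation dynamics are messier), but it captures why three-block forks are so rare:

```python
def fork_probability(p_one_block: float, depth: int) -> float:
    """Rough model: each extra fork level needs another near-simultaneous
    pair of blocks, so the per-level probabilities multiply."""
    return p_one_block ** depth

# Roughly one single-block fork per ~144 blocks (one per day), per the talk.
p = 1 / 144
for depth in (1, 2, 3):
    print(depth, fork_probability(p, depth))
```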

Unidentified Male: There is nobody; we can keep going.

Andreas: We'll keep --

Unidentified Male: Somebody might be (Inaudible0:54:33)

Unidentified Male: So 51% of hash rate (Inaudible1:06:49)

Andreas: No.

Unidentified Male: No. Okay.

Andreas: Theoretically, you could. The protocol allows it.

Unidentified Male: Yeah.

Andreas: You would have to not only sustain 51% hash rate, you would then have to do 366,000 blocks worth of proof-of-work in 10 minutes.

Unidentified Male: Okay.

Andreas: So more likely you can change one or two blocks in the most recent history.

Unidentified Male: Yes.

Andreas: Maybe three.

Unidentified Male: But the (Inaudible1:07:18)

Andreas: No. Because every node should be able to fully validate from the Genesis block up no matter what you present to it based on the same rules that were in existence at the time.

Unidentified Male: Right.

Andreas: If you present a completely valid alternate history with sufficient proof-of-work to a Bitcoin node, it should be able to validate it all the way to the present from the Genesis block. In fact, every node, when it starts, only knows the Genesis block. The first thing it has to do is synchronize with the network. And it does that by independently and painstakingly verifying everything from the Genesis block to today. It reconstructs the entire chain independently. Every node does this. It takes four to five days.

Unidentified Male: Thank you.

Andreas: The blockchain is about 40 gigabytes at the moment, I think, depending on whether you're indexing all of the transactions or not. And it's growing fast. So the actual synchronization takes quite a bit of bandwidth and quite a bit of time. Okay, questions? Yes.

Unidentified Male: The many variations of Bitcoins are like (Inaudible1:08:29) all these. Do they use the same mechanics as Bitcoin?

Andreas: Not all. So the vast majority of altcoins operate using what we would call Nakamoto consensus, Nakamoto consensus being longest-chain proof-of-work, as determined usually by a SHA-256 algorithm. Some use a SHA-3 algorithm or a scrypt algorithm or different forms of hashing algorithms, but they still implement Nakamoto consensus in terms of longest-chain proof-of-work. But there are altcoins that use other forms of consensus: modified consensus taking into consideration orphaned children, for example, which is GHOST, what I described before. We have some experimental implementations of that. There are consensus mechanisms that instead of proof-of-work are based on proof of stake or delegated proof of stake. And we're seeing all kinds of new consensus algorithms being dreamt up. How many of those can scale to a global level of security that is resistant to global attacks? So far, one: Bitcoin. But that doesn't mean another one can't scale. What's difficult, however, is that if you try to scale a consensus algorithm today, you have to reach scale before you are attacked at scale. You have to build an industrial infrastructure of hashing or mining, or a user adoption base, or an economic base that is big enough to resist attack before you are attacked. Bitcoin did this by everybody ignoring it for a couple of years because they didn't think it was important. And by the time everybody noticed and thought, "Okay, maybe this is important and worth attacking," it was already strong enough that it couldn't be attacked. And since then the strength of the network has outpaced the adoption and demand and attacks, making it extremely resilient to attack. The problem is you can't do that again, because if you actually have a really innovative consensus algorithm and people think it's going to be valuable enough to join the consensus algorithm and mine for it, they're also going to think it's valuable enough to attack.
And there's no flying under the radar anymore. This is not just an algorithm. This is now a completely new scientific discipline. Consensus algorithms will be entire computer science curricula in the future. This is a completely new branch of distributed computing. It is extremely important. And it is now six years old. This is the birth of a new scientific discipline. And we've gone from one scientific paper, the Satoshi Nakamoto paper. Last year, about 140 papers, peer reviewed academic journal papers were written on consensus algorithms. This is now a thriving scientific discipline with dozens and dozens of researchers around the world working on this. Yes.

Unidentified Male: Do you worry consensus having (Inaudible1:11:35) transactions (Inaudible1:11:37) software is updated. Do you worry (Inaudible1:11:48) is that centralized and decentralized? Do you worry about that?

Andreas: So consensus works in waves. And this is an important concept to understand. There's what I would call process consensus, which is a process of debate and proposal that happens among the development community. It starts with Bitcoin Improvement Proposals, discussions on GitHub, Bitcointalk, the Bitcoin developers' mailing lists and various other places, where people suggest changes, slight modifications to the rules or major modifications to the rules. The reason for that discussion is to enable smooth software transitions in runtime consensus. So what you do is you gather debate and try to reach process consensus, which means that you have enough people in the development community agreeing with you. You then do a lot of testing. In order to validate the software, you provide a reference implementation that implements the change, you demonstrate that reference implementation on the testnet, which is a parallel Bitcoin blockchain, you run a battery of tests, the other developers poke holes in it, find bugs, suggest improvements. And at some point, you reach consensus. Then that is implemented in the Bitcoin Core reference implementation. Now you've reached reference consensus, which means that it is introduced as a release in the software. In order to do that, you have to get a broader set of consensus. Now that software is released. In order for that software to actually go into the network, people have to upgrade. So now you have to have consensus among the constituencies. And people think that miners are the only constituency. But there are actually five consensus constituencies in Bitcoin. The software developers who are making the reference implementation; the miners who are implementing the runtime consensus for mining blocks; but also the exchanges: each exchange that exchanges Bitcoin into other forms of currency is running nodes that validate transactions, and they choose which version of the software they're running.
The wallets: each wallet company, or wallet software, out there creates transactions that must be validated by consensus rules. And if they're doing centralized wallet processing, they're also running nodes that must validate transactions based on consensus rules. And finally, merchants and merchant processing, the economic engine of Bitcoin. Merchants, either directly or through processors, are running nodes to validate transactions. And they're doing the strictest validation possible, because they're the ones who give out a plasma TV for this mythical magic internet money. And so if they don't validate the transaction correctly, they are out one TV and don't have Bitcoin to show for it. Now what happens if the miners go off on their own and the merchants, exchanges and wallets choose a different version? Well, the miners create Bitcoin and they mine it. And a hundred transactions later, they try to spend that Bitcoin. Only they can't spend the Bitcoin, because they can't buy anything with it, because the merchants are on a different chain and therefore their transaction never happened according to the merchants. They can't convert it into currency to pay for their electricity, because the exchanges are on a different chain and therefore the transaction never happened according to them. And by the way, all this time they've been mining empty blocks, because the wallets are on a different chain and they're producing transactions based on different consensus rules. It's not so easy to shift consensus in Bitcoin. In fact, what we're seeing over time is that it's getting harder and harder to modify consensus rules. This is a process that we see in distributed systems and protocols. I call it ossification. I don't know if that's an industry standard term. But the idea is that after a while, as the protocol gets embedded in more software systems and more developers learn how to use it, it gets embedded in hardware.
It gets embedded in systems that are not updated often enough or maintained often enough. Then it becomes harder and harder to change. A great example of that is IPv4. IPv4 got so embedded that we've now spent almost 20 years trying to upgrade it to IPv6, and it is resisting its own successor. It's become incredibly difficult to upgrade IPv4. And the reason for that is because you have it embedded in fridges and light switches, and wireless access points, and things that don't have interfaces and can't be upgraded, or don't have enough memory and can't be upgraded. IPv4 became ossified. The best protocol doesn't win. The protocol that's good enough and achieves network scale first wins. So the consensus rules of Bitcoin today are likely to be able to absorb change for a couple more years at the core protocol level. After that, most of the innovation has to move to protocol layers above, just like most of the innovation on the Internet moved from IP to TCP to HTTP and to other protocols above HTTP, because each of the layers below gradually became ossified and could not be changed dramatically. You can't go and change TCP/IP today. It's simply impossible. All right. Let me take one more question and then we're going to wrap, because we're running late. Yes.

Unidentified Male: How many transactions can you actually get done in 10 minutes?

Andreas: So how many transactions can you get done in 10 minutes? That is a capacity limitation, which is artificially constrained by the maximum size of a block. A block today can be a maximum size of one megabyte. With a maximum size of one megabyte, it can fit a few thousand transactions, depending on the size of each transaction, which is variable. The average processing capacity is between three transactions per second and seven transactions per second with the current constraints. These are artificial constraints. So there's a big debate going on in Bitcoin at the moment as to how and when to raise the capacity limit. For the time being, blocks are coming in about 60 to 75% full, meaning that there's still excess capacity to fill with low-fee transactions. And the vast majority of transactions get confirmed within the next available block on a best-effort basis. Occasionally, when the network is under stress, it may take two or three blocks for a transaction to be processed. Transactions don't have an expiration date. As long as the network knows about them, they will eventually be confirmed. They are valid forever. And so therefore, you can keep retransmitting a transaction until there's sufficient capacity in the system to absorb it. It's fairly resilient in that way. Now the proposals of the moment are to increase the block size capacity by January 2016 to eight megabytes, an eightfold increase, followed by a doubling every four years. So 8, 16, 32 every four years. Keep increasing the size and by 2032 that gets us to a twenty-gigabyte block, which, if we keep approximately the same size of transaction, means a 20,000-times increase in the capacity of transactions. So you're looking at approximately 100,000 transactions per second. 100,000 transactions per second is the global capacity of the Visa network.
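The three-to-seven transactions-per-second figure follows from simple arithmetic: block size divided by average transaction size, divided by the 600-second average block interval. A sketch, where the 250-byte and 500-byte average transaction sizes are illustrative assumptions rather than figures from the talk:

```python
def tx_per_second(block_size_bytes: int, avg_tx_bytes: int,
                  block_interval_s: int = 600) -> float:
    """Throughput implied by a block size limit and an average transaction size."""
    return block_size_bytes / avg_tx_bytes / block_interval_s

ONE_MB = 1_000_000
print(tx_per_second(ONE_MB, 500))  # ~3.3 tx/s with larger transactions
print(tx_per_second(ONE_MB, 250))  # ~6.7 tx/s with smaller transactions
```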
So it depends on whether you need to do all of the transactions on the blockchain, because there's a big argument that you can actually do a lot of the transactions off the blockchain. You don't need to put every transaction on the blockchain. There are many circumstances where you can do incremental transactions between two parties. This technology is called payment channels. And then you do eventual settlement of the net difference between the sum of all those transactions as a single transaction on the blockchain. What that does is it takes the trust capability of the network and provides it as an attribute to a layer above that can leverage it, but without flooding the blockchain with transactions. So there's the base amount of transactions you can actually record on the blockchain, but each one of those could represent hundreds of transactions that happen off-blockchain between two parties. So the actual capacity may be a lot higher. It's a bit like, in monetary terms, we talk about M0, which is the base amount of physical currency that is in existence, right. And the amount of cash that exists in the economy is less than 3% of the total amount of money in the economy. But because that cash gets used again and again and again, it actually enables a lot more economic activity, velocity for each unit of currency. In blockchain terms, you can think of the base transaction rate as the capacity of the base mechanism. But with overlay networks, you can magnify that and increase the velocity of each transaction. Does that answer your question?
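The settlement idea above can be sketched as simple netting: many incremental off-chain payments between two parties collapse into a single on-chain transaction for the net difference. This is a conceptual sketch only; real payment channels enforce the running balance with signed commitment transactions rather than trust:

```python
def net_settlement(increments):
    """Sum many off-chain payment updates (positive: A pays B,
    negative: B pays A) into the single net amount settled on-chain."""
    return sum(increments)

# 100 coffee-sized off-chain updates settle as one blockchain transaction.
payments = [3, 3, -1, 3, 3] * 20
print(len(payments))             # 100 off-chain updates
print(net_settlement(payments))  # 220: one on-chain settlement transaction
```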

Unidentified Male: So essentially, you're not expecting to (inaudible1:21:33) in each block that will be dealt with (Inaudible1:21:31).

Andreas: Well, at the moment you could if you wanted to, just about.

Unidentified Male: But if you had more -- I'm thinking in terms of value.

Andreas: You have to predict. Is Bitcoin the currency with which you buy aircraft carriers? Or is Bitcoin the currency with which every one, two, three billion people buy a cup of coffee every single day, right? And then the secondary question is: if Bitcoin is the currency with which billions of people buy a cup of coffee every day, do all of those transactions happen on the core blockchain, which means that we need to massively increase capacity? Or do many of them happen on overlay networks with eventual settlement, which still preserves the decentralized nature, still preserves the transactional trust in the thing, but doesn't flood the blockchain? And there are competing schools of thought on this. So we necessarily must scale up capacity. And the question is not whether we will use technique A, B or C, but more how quickly we can use A and B and C in parallel to increase capacity. We're seeing overlay networks, and block size increases, and optimization of block propagation, and pruning of transactions off the blockchain to reduce the footprint on disk, and optimizations in validation and processing. All of these things are happening at the same time. So it's a bit of a philosophical issue at the moment. We don't know where Bitcoin is going as a transaction processing environment yet.

Unidentified Male: So, you're suggesting (Inaudible1:23:15) the whole thing it would be a simple (Inaudible1:23:17)?

Andreas: Not necessarily. I think that's more likely, but it could very well scale to support hundreds of thousands of transactions per second. Because one of the other things that is the context in which all of this is happening, of course, is Moore's law. And so if bandwidth and storage and compute capacity continue to increase at Moore's law, then in the next 20 years, as Bitcoin reaches mainstream adoption, you could actually support billions of users with quadrillions of transactions. You just have to start moving a lot more data on a lot beefier computers. And so we don't know exactly where it's going. We'll see. This is a software engineering problem. This is why it's exciting. Hopefully, that research will be done here at UCL. Thank you so much for (Inaudible1:24:16).

(END OF AUDIO)

Written by Andreas M. Antonopoulos on January 31, 2016.