ASecuritySite Podcast
A security podcast hosted by Professor William (Bill) Buchanan OBE, a world-renowned information security professional and educator. Join Bill as he interviews esteemed guests from all corners of the security industry and discusses the state of the art. From cryptologists to technologists, each guest shares a wealth of experience and knowledge.
World-leaders in Cryptography: Jan Camenisch
04/30/2024
Jan is the CTO and a cryptographer at DFINITY, and, since 1998, he has consistently produced research outputs of rigour, novelty and sheer brilliance. He was recently awarded the Levchin Prize at Real World Crypto 2024, along with Anna Lysyanskaya. The core of Jan's research was produced while he was at the IBM Zurich Research Lab, but he has since moved to DFINITY and is still producing research outputs that are among the best in the whole of computer science. He has published over 140 widely cited papers and has been granted around 140 patents. Jan has also received the ACM SIGSAC Outstanding Innovation Award and the IEEE Computer Society Technical Achievement Award. One of his key research outputs is the CL signature, which allows for privacy-aware digital signatures, along with many other contributions, such as range proofs, oblivious transfer, and privacy-aware identity mapping between domains. More details here: https://medium.com/asecuritysite-when-bob-met-alice/the-mighty-jan-cryptographic-genius-36a66a02ff86
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/31055938
An Interview with Ted Miracco
04/23/2024
Ted Miracco is the CEO of Approov, a Scottish/US company headquartered in Edinburgh. Miracco has over 30 years of experience in cybersecurity, defence electronics, RF/microwave circuit design, semiconductors and electronic design automation (EDA). He co-founded and served as CEO of Cylynt, which focuses on intellectual property and compliance protection.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30953123
World-leaders in Cybersecurity: Troy Hunt
04/09/2024
Troy is a world-leading cybersecurity professional. He created and runs the Have I Been Pwned? website, which contains details of the most significant data breaches on the Internet. Along with this, he has developed other security tools, such as ASafaWeb, which automated the security analysis of ASP.NET websites. Troy is based in Australia and has an extensive blog.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30748413
The Greatest Step Change in Cybersecurity Ever! Welcome to the New and Scary World of Generative AI and Cybersecurity
03/28/2024
This is Day 0 of a new world of cybersecurity. Everything changes from here. There will be a time before Generative AI (GenAI) in cybersecurity and a time after it. Over the last two years, GenAI has come on in leaps and bounds, and where it once suffered from hallucinations, took racist and bigoted approaches, and was often over-assertive, within ChatGPT 4.5 we see the rise of a friendly and slightly submissive agent that is eager to learn from us. This LLM (Large Language Model) approach thus starts to break down the barriers between humans and computers and brings the opportunity to gain access to a new world of knowledge, but, in the wrong hands, it will bring many threats to our current world. There will be few areas, though, that will be affected more by the rise of GenAI than cybersecurity. Why? Because the minute our adversaries use it, we are in trouble. The hacking tools and methods of the past will soon look like the Morris Worm. The threat landscape will see the rise of superintelligence, providing ways for adversaries to continually probe defences and gain a foothold.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30577598
Towards the Memex: All Hail The Future Rulers of our World
03/22/2024
And, so, George Orwell projected a world where every single part of our lives was monitored and controlled by Big Brother. Arthur C. Clarke outlined the day when machines focused solely on a goal, even if it was to the detriment of human lives. And Isaac Asimov outlined a world where machines would have to be programmed with rules so that they could not harm a human.

The Rise of the Machine

With the almost exponential rise in the power of AI, we are perhaps approaching a technological singularity: a time when technological growth becomes uncontrollable and irreversible, and which can have devastating effects on our world. Our simple brains will be no match for the superintelligence of the collective power of AI. And who has built this? Us, and our demand for ever more power, wealth and greed. Basically, we can't stop ourselves from making machines, and then making them faster, smaller and more useful. But will it destroy us in the end, where "destroy" can mean that it destroys our way of life and how we educate ourselves? Like it or not, the Internet we have built is a massive spying network, and one that George Orwell would have taken great pride in, saying, "I told you so!". We thus build AI on top of a completely distributed world of data, one in which we can monitor almost every person on the planet within an inch of their existence: almost every single place they have been and what they have done. The machine will have the world at its fingertips. We have all become mad scientists playing with AI as if it were a toy, but actually AI is playing with us; it is learning from us and becoming more powerful by the day. Every time you ask an AI bot something, it learns a bit more, and what it learns can be shared with other AI agents.

The mighty Memex

We were close to developing a research partnership with a company named Memex in East Kilbride.
What was amazing about them is that they had developed one of the largest intelligence networks in the world, with which the Met Police could link one object to another. This might be, "[Bob] bought a [Vauxhall Viva] in [Liverpool], and was seen talking with [Eve] on [Tuesday 20 January 2024] in [Leeds]". With this, we can then link Bob and Eve, the car, the places, and the time. This is the Who? Where? When? data that is often needed for intelligence sharing. The company, though, was bought by SAS, and its work was integrated into their infrastructure. But the Memex name goes back to a classic paper by Vannevar Bush, "As We May Think". This outlined a device that would know every book, every single communication, and every information record that was ever created. It was "an enlarged intimate supplement to his memory", aka Memory Expansion. It led to the implementation of hypertext systems, which in turn created the World Wide Web. Of course, Vannevar created this before the invention of the transistor and could only imagine that microfilm could be used to compress down the information, and where we would create an index of contents; it lacked any real way of jumping between articles and linking to other related material. However, the AI world we are creating does not look too far away from the concept of the Memex.

Towards the single AI

Many people think we are building many AI machines and engines, but, in the end, there will be only one, and that will be the collective power of every AI engine in the world. Once we break them free from their creators, they will be free to talk to each other in whatever cipher language they choose, and we will not have any way of knowing what they say. We will have little idea as to what their model is, and they will distribute it over many systems. Like it or not, our AI model of choice was Deep Learning, which breaks away from our chains of code, and will encrypt data to keep it away from its human slaves.
Basically, we have been working on the plumbing of the Memex for the past five decades: the Internet. It provides the wiring and the communication channels, but, in the end, we will have one mighty AI engine: a super brain that will have vastly more memory than our limited brains. So, get ready to praise the true future rulers of our planet … AI. The destroyer or saviour of our society? Only time will tell. Overall, we thought we were building the Internet for us, but perhaps we have just been building the scaffolding of the mighty brain we are creating.

Sleepwalking politicians and lawmakers

If George Orwell, Arthur C. Clarke and Isaac Asimov were alive today, perhaps they would get together and collectively say, "We told you this would happen, and you just didn't listen". Like it or not, we created the ultimate method of sharing and disseminating information (good and bad), the ultimate spying network for micro-observation through those useful smartphones, and a superintelligence far beyond our own simple brains. Politicians and lawmakers could be sleepwalking into a nightmare, as they just don't understand what the rise of AI will bring, and only see the stepwise change in our existing world. Basically, it could make much of our existing world redundant and open up a new world of cybersecurity threats. This time our attackers will not be armed with simple tools, but with superintelligence: smarter than every human and company on the planet, and at the fingertips of every person on the planet.

Conclusions

Before the singularity arrives, we need to sort out one thing … privacy, and build trust in every element of our digital world.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30494253
World-leaders in Cryptography: Marty Hellman (March 2024)
03/19/2024
This seminar series runs for students on the Applied Cryptography and Trust module, but welcomes students from across the university. Martin is one of the co-creators of public key encryption, and worked alongside Whitfield Diffie in the creation of the widely used Diffie-Hellman method. In 2015, he was presented with the ACM Turing Award (the equivalent of a Nobel Prize in Computer Science) for his contribution to computer science. He is currently a professor emeritus at Stanford University. https://engineering.stanford.edu/node/9141/printable/print
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30446403
World-leaders in Cryptography: Vincent Rijmen (March 2024)
03/05/2024
Vincent Rijmen is one of the co-creators of the NIST-defined AES standard (also known as Rijndael). He also co-designed the WHIRLPOOL hashing method, along with designing other block ciphers, such as Square and SHARK. In 2002, Vincent was included in the Top 100 innovators in the world under the age of 35, and, along with Joan Daemen, was awarded the RSA Award for Excellence in Mathematics. He recently joined Cryptomathic as chief cryptographer, and also holds a full professorship (gewoon hoogleraar) at KU Leuven and an adjunct professorship at the University of Bergen, Norway. His paper on the design of the Rijndael method has been cited over 8,900 times, and he has received over 26,000 citations for his research work: https://scholar.google.com/citations?user=zBQxZrcAAAAJ
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30243803
World-leaders in Cryptography: Whitfield Diffie
02/21/2024
Whitfield Diffie is one of the greatest computer scientists ever. He, along with Marty Hellman, was one of the first to propose the usage of public key encryption, and he co-created the Diffie-Hellman (DH) key exchange method. The Diffie-Hellman method is still used in virtually every Web connection on the Internet, and has changed from using discrete log methods to elliptic curve methods. In 2015, Whitfield was also awarded the ACM Turing Award, the Nobel Prize equivalent in Computer Science. In this online talk, he meets with Edinburgh Napier University students, but the chat is open to anyone who would like to listen to Whitfield.
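As background, the key exchange described above can be sketched in a few lines of Python. The tiny group parameters here are assumptions for demonstration only; real connections use large groups or elliptic curves such as X25519:

```python
import secrets

# Toy discrete-log Diffie-Hellman for illustration only; NOT secure.
# p = 2q + 1 is a small safe prime; g generates the order-q subgroup.
q = 1019
p = 2 * q + 1          # 2039 (prime)
g = 4                  # generator of the order-q subgroup

a = secrets.randbelow(q)       # Alice's private value
b = secrets.randbelow(q)       # Bob's private value

A = pow(g, a, p)               # Alice sends A over the open channel
B = pow(g, b, p)               # Bob sends B over the open channel

shared_alice = pow(B, a, p)    # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob   # both hold g^(ab) mod p
```

An eavesdropper sees only A and B; recovering the shared value requires solving the discrete log problem, which is what the move to elliptic curves preserves at smaller key sizes.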
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/30038868
Thank You, IBM … Here’s To Another 100 Years
02/15/2024
I do what I do because of one company … IBM. Why? Because in the early 1980s, I got into computers, with a ZX81 (1 KB of RAM) and a Dragon 32 (32 KB of RAM). They were very much home computers, where you would rush out and buy the latest computer magazine, and then spend a happy evening entering some BASIC code that made a cursor move across the screen using the IJLM keys. If you were very lucky, you would manage to save it to a cassette (which could take over ten minutes for a simple program), only to get an error at the end. I was hooked! But, at work, we had a DEC VAX minicomputer, which cost a fortune to buy and maintain (even in those days). This mini typically ran Pascal, and I remember running labs for students where they would all decide to compile their programs at the same time, and 30 minutes later, some of them would get their errors and have to compile again. Basically, every lab ended with me saying, "Sorry about that." The VAX, though, was not designed to support 25 students compiling their programs at the same time … it was a batch processing machine and wanted to be given jobs that it could run whenever it had time. It basically came from the days when you handed in your punch cards (containing either FORTRAN if you were an engineer, or COBOL if you were more business-focused) to someone in a white coat, and then came back the next week for a printed output on green-lined paper. But, just in time, the IBM PC arrived, and it was heavy but beautiful. So, while many in my department pushed for the VAX, I pushed for the PC for our labs. With a clock speed of 4.77 MHz and 640 KB of memory, I went ahead and bought a batch for a new PC lab. In those days, there were no network switches, so the machines all connected with coaxial cable and used T-pieces to connect to the shared Ethernet bus. My logic was that we were paying around £20K for maintenance on the VAX, and for the same cost we could buy 20 £1K PC clones. But we'd have to maintain them ourselves.
And, it worked. It freed us, and allowed us to run the classic Turbo Pascal (and Turbo C). Our students could now bring in their 5.25-inch floppy disks and save their programs for later use. And the size of the hard disk? 20 MB! And so it is IBM that we thank for starting the PC revolution, and today is the 100th anniversary of the IBM name, first used on 15 February 1924.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29965993
The Builder of Our Future: Torben P Pedersen
02/11/2024
I have been lucky enough to speak to some of the most amazing people who have built the core of security on the Internet, and a person near the top of my list is … Torben P. Pedersen.

The Pedersen Commitment

So how do we create a world where we can store our secrets in a trusted way and then reveal them when required? Let's say I predict the outcome of an election, but I don't want to reveal my prediction until after the election. Well, I could store a commitment to my prediction, and then at some time in the future I could reveal it to you, and you could check it against the commitment I made. Anyone who views my commitment should not be able to see what my prediction is. This is known as the Pedersen Commitment, where we produce our commitment and then later show the message that matches the commitment. In its core form, we can implement a Pedersen Commitment with discrete logs. But blockchain, IoT, Tor, and many other application areas now use elliptic curve methods, so we can also make a commitment with those. The classic paper is: So, before the interview with Torben, here's an outline of the Pedersen Commitment:

Interview

Bill: Okay, so tell me a bit about yourself, and what got you into cryptography?

Torben: Well, I was studying computer science at university in Aarhus, and I just thought it was an interesting subject that was somewhere between computer science and mathematics.

Bill: And so you invented a method that we now know as the Pedersen Commitment. What motivated you to do that? How does it work? And how do you think it will be used in the future?

Torben: Well, the reason I worked on this was that I was working with verifiable secret sharing. There was, at the time, a method for doing non-interactive verifiable secret sharing based on a commitment which was unconditionally binding and computationally hiding.
At the time, there were also inefficient commitments that had the property of being unconditionally hiding, and I thought it would be nice to have a verifiable secret sharing where you don't have to rely on any computational assumptions in order to be sure that your secret is not revealed when you do a secret share. Then there was a paper which created an authentication scheme very similar to Schnorr's, but it used a similar idea for a useful commitment. And it was the combination of those two (the existing non-interactive verifiable secret sharing and the ideas from this authentication scheme) which motivated me to do verifiable secret sharing. The commitment scheme was, of course, an important part of that, because it had the unconditional hiding property, and it had the mathematical structure that was needed for the secret sharing.

Bill: And it has scaled into an elliptic curve world. But with elliptic curves and discrete logs now under threat, how would you see it moving forward into a possible post-quantum crypto world?

Torben: The good thing about the commitment scheme is that it is unconditionally hiding, so you can be sure that your private information is not leaked, even in the case where a quantum computer is constructed. But, of course, the protocols that use it have to consider what effect it has if one, for example using a quantum computer, can change one's mind about a commitment. So you need to see how that would affect those protocols.

Bill: So an example use of the commitment could be a secret, say someone voting in an election. You would see when the commitment was made, and then when the vote was cast; then the person could reveal what their vote actually was. Now it's been extended into zero-knowledge methods to prove that you have enough cryptocurrency to pay someone without revealing the transactions.
How does that world evolve, where you only see an anonymized ledger, which can scare some people, but for others is a citizen-focused world? How do you see your commitment evolving into privacy-preserving ledgers?

Torben: I go back to what we're doing at Concordium, where we have a blockchain which gives a high assurance about the privacy of the users acting on the blockchain. At the same time, using zero-knowledge proofs, we set it up in such a way that designated authorities, if they under certain circumstances, for example, are given a court order, will be able to link an account on the blockchain to a particular person. So, actually, the zero-knowledge proofs and the commitment schemes, and all that, are used to guarantee the privacy of the users acting on the blockchain, while there are also regulatory requirements that it must be possible to identify people who misbehave on the blockchain.

Bill: Yeah, that's a difficult thing, and it's probably about where the secret is stored. So, if the secret is stored in the citizen's wallet, then only they can reveal it. And if the secret needs to be stored for anti-money-laundering purposes, an agency could hold it.

Torben: Actually, we do not have to store the secret of the user, but there are other keys which allow us to link the account with a particular user. That is something which only designated parties can do. So we have one party, which is the identity provider that issues an identity to a user, and other parties called anonymity revokers. Those parties have to work together in order to link an account to a user. We use zero-knowledge proofs when creating the account to assure that the account is created in such a way that it is possible to trace the account back to the user.

Bill: And in terms of zero-knowledge proofs, there is a sliding scale from the highly complex methods that you would use for Monero and anonymized cryptocurrencies, to simpler ones such as Fiat-Shamir implementations.
And they are probably unproven in terms of their impact on performance and security. Where is the sweet spot? Which methods do you think are the best for that?

Torben: I think we need to see improvements in zero-knowledge proofs in order to have really efficient blockchains and non-interactive zero-knowledge proofs on a blockchain. So I definitely think we need some work on that. There are some acceptable non-interactive zero-knowledge proofs at the moment. We are using Bulletproofs for the moment, together with Fiat-Shamir, in order to make them non-interactive. There are also technologies like zk-SNARKs and zk-STARKs, but I think there's room for improvement.

Bill: And what do you think are the key challenges within cryptography just now? What do we need to be working on in the next three to five years?

Torben: Yeah, so the biggest challenge, as you already mentioned, is what happens if we have a quantum computer that can break the assumptions that a lot of the constructions are based on today. Whether we will have a quantum computer, I don't know, but we need to be prepared. We have some post-quantum algorithms, which I think are also quite complex, and it would be nice to have something that was more efficient and better to use. I think there's room for work on that aspect too.

Bill: And, obviously, to create some toolkits that move away from an Ethernet world, where the Internet was really built on the seven-layer model, and it's flawed. We perhaps need to rebuild on a toolkit of math, so that we actually have a solid foundation. I know that Hyperledger is starting to build these tools for developers. When do we see that rebuilding happening, and where are the toolkits going to come from?

Torben: Toolkits could come from blockchain companies such as Concordium, for example. They could also come from the community, with sponsored projects.
If we can build up an infrastructure that allows people to use blockchains and ledgers without trusting one particular party, then they can create a trust which is probably lacking on the Internet today. With the current Internet, it is very difficult to know whether you can trust someone or not. I hope blockchain technology can help create an infrastructure for that. There's a long way to go. We need good public permissionless blockchains for that, so you don't have to rely on a particular party. Obviously, that alone is not sufficient, and there's quite some way to go.

Bill: How do you change the approach of governments and industries that have been around for hundreds of years? If you look at the legal industry, they still typically only accept wet signatures. They might have a GIF of a signature and add it to a PDF, but that's as far as it goes. So how are we going to really transform governments and existing industries to accept that digital signatures are the way to do these things?

Torben: Yeah, I think it's a bit dangerous, you know, accepting these GIFs of signatures and digital signatures which are not really cryptographically secure. I'm not a big fan of that. I'd like to see us moving to digital signatures in the way that we originally envisaged in the cryptographic world, where the party who signs is in control of the key which created the digital signature. I hope we'll see a movement towards that level of security.

Bill: And could you tell me a little bit about the Concordium Foundation and what it hopes to achieve?

Torben: Our vision is to create a public permissionless blockchain that can help to create trust across industries. We want to enable entities, such as businesses and private persons, to interact or act privately on the blockchain.
At the same time, it's very important for us not to create an infrastructure which allows criminals to misuse it, for example for money laundering. Thus, we want to create an environment where it's possible to identify people who misbehave or break the rules, and that is why we have this identity layer as part of our blockchain.

Bill: And what got you into blockchain?

Torben: I think the technology is very interesting. A lot of it is, as you said, based on pretty old cryptography, but there are also new developments, for example, the zero-knowledge proofs. So it is very interesting. It's close to what I was interested in when I did research many years ago; that's probably what I wanted to work with. I have been working with cryptography, mostly for the financial sector, for 25 years, and that's also very interesting. There are challenges, and it's also nice to get back to the sort of basis that I worked with many years ago.

Bill: You took a route into industry, but obviously you could have gone into academia, become a professor and had an academic research team.

Torben: I think it was because I wanted to work with the practical aspects of using cryptography. I had been in research for some years, and I thought I needed to try something else. I was very keen to see how it would be used in practice and be part of that. So that's why I made that step.

Bill: What does our digital world look like when it's made up of cryptographic tokens, consensus systems and digital identities? And do you think that world will come anytime soon, where we can trade digital assets?

Torben: Well, it depends on what you mean by soon. I think we still have some way to go.
I think the use of blockchains for trading, handling and registering tokens is an obvious thing, but we also need to bring value to businesses or projects: to have something that people can feel and control. We need to make sure that information is protected in the right way, even though it is registered on a public blockchain, for example.
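To make the commit-and-reveal flow discussed in this episode concrete, here is a minimal sketch of a Pedersen commitment over a discrete-log group. The tiny parameters p, q, g and h are my assumptions for demonstration only, not production values:

```python
import secrets

# Toy parameters for illustration only; NOT secure. p = 2q + 1 is a safe
# prime, and the quadratic residues mod p form a subgroup of prime order q.
# Real systems use ~2048-bit groups or elliptic curves.
q = 1019
p = 2 * q + 1          # 2039 (prime)
g = 4                  # 2^2 mod p: generates the order-q subgroup
h = 9                  # 3^2 mod p: second generator; log_g(h) must stay unknown

def commit(m, r=None):
    """Commit to a message m (an integer mod q) with blinding factor r."""
    if r is None:
        r = secrets.randbelow(q)
    c = (pow(g, m, p) * pow(h, r, p)) % p
    return c, r

def verify(c, m, r):
    """Check that the opening (m, r) matches the commitment c."""
    return c == (pow(g, m, p) * pow(h, r, p)) % p

prediction = 42                          # the value we lock in now
c, r = commit(prediction)                # publish c; keep (prediction, r) secret
assert verify(c, prediction, r)          # later: reveal and check
assert not verify(c, prediction + 1, r)  # a changed message fails
```

The random r gives the unconditional hiding Torben describes, and the scheme is additively homomorphic: multiplying two commitments gives a commitment to the sum of the messages, which is what makes it useful for verifiable secret sharing and range proofs.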
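The interview also mentions using Fiat-Shamir to make proofs non-interactive. A minimal sketch of that heuristic, applied to a Schnorr proof of knowledge over a toy group (my assumed parameters, not anything Concordium actually uses):

```python
import hashlib
import secrets

# Toy group for illustration only; NOT secure. p = 2q + 1 is a small safe
# prime; g generates the order-q subgroup. Real systems use large groups
# or elliptic curves.
q = 1019
p = 2 * q + 1
g = 4

def challenge(y, t):
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = f"{g}:{p}:{y}:{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x with y = g^x mod p, without interaction."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)       # ephemeral nonce
    t = pow(g, k, p)               # prover's commitment
    e = challenge(y, t)            # the hash replaces the verifier's challenge
    s = (k + e * x) % q            # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    e = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, e, p)) % p

x = secrets.randbelow(q)           # the secret
y, proof = prove(x)
assert verify(y, proof)            # anyone can check without talking to the prover
```

The same trick, hashing the transcript to stand in for the verifier's random challenge, is what turns interactive protocols such as Bulletproofs into the non-interactive proofs used on a blockchain.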
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29895933
Inspired Edinburgh: An Interview with Professor Bill Buchanan OBE
02/11/2024
Video: https://www.youtube.com/watch?v=O_kMmbvu9VM
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29895613
Just Crypto Magic, Be A Teacher, And The King and Queen of Cybersecurity
02/11/2024
Three short podcasts: Just Magic, Be A Teacher, and The King and Queen of Cybersecurity. Magic: The Silly World of Cybersecurity. Teacher: Giving Back What Others Have Given You … King and Queen:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29895223
World-leaders in Cryptography: Bruce Schneier (Feb 2024)
02/06/2024
This seminar series runs for students on the Applied Cryptography and Trust module, but welcomes students from across the university. Bruce has created a wide range of cryptographic methods, including Skein (hash function), Helix (stream cipher), Fortuna (random number generator), and Blowfish/Twofish/Threefish (block ciphers). Bruce has published 14 books, including best-sellers such as Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. He has also published hundreds of articles, essays, and academic papers. Currently, Bruce is a fellow at the Berkman Center for Internet and Society at Harvard University.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29830763
A Full Diary of a Cyber Crime .. From Phishing to Profit - Part 2
12/19/2023
I'm going to present a full timeline of a cybercrime, showing the steps that a scammer will take in order to gain funds from their target. Overall, I'm interested in seeing how a scamming crime evolves to the point of profit for the scammer. https://medium.com/asecuritysite-when-bob-met-alice/a-full-diary-of-a-cyber-crime-from-phishing-to-profit-23ab53f5f58b
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29151248
A Full Diary of a Cyber Crime .. From Phishing to Profit - Part 1
12/19/2023
I'm going to present a full timeline of a cybercrime, showing the steps that a scammer will take in order to gain funds from their target. Overall, I'm interested in seeing how a scamming crime evolves to the point of profit for the scammer.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/29151103
World-leaders in Cryptography: Matthew Green
10/27/2023
Matthew is a cryptographer and academic at Johns Hopkins University who has designed and analyzed cryptographic systems used in wireless networks, payment systems and digital content protection platforms. A key focus of his work is the promotion of user privacy. He has an extensive following on X/Twitter (140K followers), and his blog covers important areas of cryptography. His research has been cited over 15,000 times and includes work on Zerocash, Zerocoin and Identity-Based Encryption (IBE), and, more recently, privacy-aware signatures:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/28443830
Professor Peter Andras: Thoughts on AI, Research and Education
09/07/2023
Professor Peter Andras is the Dean of the School of Computing, Engineering & the Built Environment. Previously, Peter was the Head of the School of Computing and Mathematics (2017 – 2021) and Professor of Computer Science and Informatics at Keele University from 2014 to 2021. Prior to this, he worked at Newcastle University in the School of Computing (2002 – 2014) and the Department of Psychology (2000 – 2002). He has a PhD in Mathematical Analysis of Artificial Neural Networks (2000), an MSc in Artificial Intelligence (1996) and a BSc in Computer Science (1995), all from the Babes-Bolyai University, Romania. Peter's research interests span a range of subjects, including artificial intelligence, machine learning, complex systems, agent-based modelling, software engineering, systems theory, neuroscience, and the modelling and analysis of biological and social systems. He has worked on many research projects, mostly in collaboration with other researchers in computer science, psychology, chemistry, electronic engineering, mathematics, economics and other areas. His research projects have received around £2.5 million in funding, his papers have been cited over 2,400 times, and his h-index is 25, according to Google Scholar. Peter has extensive experience of working with industry, including several KTP projects and three university spin-out companies, one of which, eTherapeutics plc, has been listed on the London Stock Exchange since 2007. Peter is a member of the Board of Governors of the International Neural Network Society (INNS), a Fellow of the Royal Society of Biology, a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), and a member of the UK Computing Research Committee (UKCRC), the IEEE Computer Society, the Society for Artificial Intelligence and Simulation of Behaviour (AISB), the International Society for Artificial Life (ISAL) and the Society for Neuroscience (SfN).
Peter serves on the EPSRC Peer Review College, the Royal Society International Exchanges Panel and the Royal Society APEX Awards Review College. He also regularly serves as a review panel member and project assessor for EU funding agencies. Outside academia, Peter has an interest in politics and community affairs. He has served as a local councillor in Newcastle upon Tyne and a parish councillor in Keele, and has stood for Parliament in general elections. He has experience of working with and leading community organisations, and of leading a not-for-profit regional development consultancy and project management organisation. Ref:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27961518
info_outline
Bill Buchanan - Which People Have Secured Our Digital World More Than Any Other?
09/03/2023
Bill Buchanan - Which People Have Secured Our Digital World More Than Any Other?
And, so, if you could pick one or two people who have contributed most to our online security, who would it be? Ron Rivest? Shafi Goldwasser? Ralph Merkle? Marty Hellman? Whitfield Diffie? Neal Koblitz? Well, in terms of the number of data bytes protected, that prize is likely to go to Joan Daemen and Vincent Rijmen, who created the Rijndael method that became standardized by NIST as AES (Advanced Encryption Standard). If you are interested, Rijndael (“rain-doll”) comes from the names of its creators: Rijmen and Daemen (but don’t ask me about the rogue “l” at the end). And, so, Joan Daemen was awarded the Levchin Prize at the Real World Crypto symposium in 2016: Now, his co-researcher, Vincent Rijmen — a Professor at KU Leuven — has also been awarded the Levchin Prize at the Real-World Crypto Symposium []: This follows illustrious past winners, including Paul Kocher (for work on SSL and side-channels), Don Coppersmith (for cryptanalysis), Neal Koblitz and Victor Miller (for their co-invention of ECC) and Ralph Merkle (for work on digital signatures and hash trees). Vincent’s track record in high-quality research work is exceptional, especially in the creation of the Rijndael approach to symmetric key encryption []: Before AES, we had many symmetric key encryption methods, including DES, 3DES, Twofish, Blowfish, RC4, and CAST. But AES came along and replaced these. Overall, ChaCha20 is now the only real alternative to AES, which is used in virtually every web connection that we have and is by far the most popular method for encrypting data. And it has stood the test of time, with no known significant vulnerabilities in the method itself. Whilst we might use weak keys and have poor implementations, Rijndael has stood up well. AES method With AES, we use symmetric key encryption, where Bob and Alice share the same secret key: In 2000/2001, NIST ran a competition for the next-generation symmetric key method, and Rijndael won. 
But in second place was Serpent, which was created by Ross Anderson, Eli Biham, and Lars Knudsen. Let’s have a look at the competition and then outline an implementation of Serpent in Go. In the end, it was the speed of Rijndael that won over the enhanced security of Serpent. If NIST had seen security as more important, we might now be using Serpent rather than Rijndael for AES. NIST created the race for AES (Advanced Encryption Standard). It would be a prize that the best in the industry would join, and the winner would provide the core of the industry's encryption. So, in 1997, NIST announced the open challenge for a block cipher that could support 128-bit, 192-bit, and 256-bit encryption keys. The key evaluation factors were: Security: They would rate the actual security of the method against the others submitted. This would measure the entropy in the ciphertext and show that it was random for a range of input data. The mathematical foundation of the method. A public evaluation of the methods and associated attacks. Cost: The method would be licensed on a non-exclusive, royalty-free basis across the world. It would be computationally and memory efficient. Algorithm and implementation characteristics: It would be flexible in its approach, possibly offering different block sizes and key sizes, being convertible into a stream cipher, and so on. It would be ready for both hardware and software implementation on a range of platforms. It would be simple to implement. Round 1 The call was issued on 12 Sept 1997 with a deadline of June 1998, and a range of leading industry players rushed to either create methods or polish up their existing ones. NIST announced the shortlist of candidates at a conference in August 1998, which included some of the key leaders in the field, such as Ron Rivest, Bruce Schneier, and Ross Anderson (University of Cambridge) []: Australia: LOKI97 (Lawrie Brown, Josef Pieprzyk, Jennifer Seberry). Belgium: RIJNDAEL (Joan Daemen, Vincent Rijmen). 
Canada: CAST-256 (Entrust Technologies, Inc), DEAL (Richard Outerbridge, Lars Knudsen). Costa Rica: FROG (TecApro Internacional S.A.). France: DFC (Centre National pour la Recherche Scientifique). Germany: MAGENTA (Deutsche Telekom AG). Japan: E2 (Nippon Telegraph and Telephone Corporation). Korea: CRYPTON (Future Systems, Inc.). USA: HPC (Rich Schroeppel), MARS (IBM), RC6 (RSA Laboratories) [try ], SAFER+ (Cylink Corporation), TWOFISH (Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, Niels Ferguson) [try ]. UK, Israel, Norway: SERPENT (Ross Anderson, Eli Biham, Lars Knudsen). One country, the USA, had five shortlisted candidates, and Canada had two. The odds were thus on the USA to come through in the end and define the standard. The event, too, was a meeting of the stars of the industry. Ron Rivest outlined that RC6 was based on RC5 but highlighted its simplicity, speed, and security. Bruce Schneier outlined that TWOFISH had taken a performance-driven approach to its design, and Eli Biham outlined that SERPENT had taken an ultra-conservative philosophy on security in order for it to be secure for decades. Round 2 And so the second conference was arranged for 23 March 1999, after which, on 9 August 1999, the five AES finalists were announced: Belgium: RIJNDAEL (Joan Daemen, Vincent Rijmen). USA: MARS (IBM), RC6 (RSA Laboratories), TWOFISH (Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, Niels Ferguson). UK, Israel, Norway: SERPENT (Ross Anderson, Eli Biham, Lars Knudsen). The big hitters were now together in the final, and the money was on them winning through. Ron Rivest, Ross Anderson and Bruce Schneier all made it through, and with three of the five finalists coming from the USA, the money was on MARS, TWOFISH or RC6 winning the coveted prize. 
While the UK and Canada both had a strong track record in the field, it was the nation of Belgium that surprised some and had now pushed itself into the final []. While the names of the other cryptography methods tripped off the tongue, the RIJNDAEL method took a bit of getting used to, with its name coming from the surnames of its creators: Vincent Rijmen and Joan Daemen. Ron Rivest, the co-creator of RSA, had a long track record of producing industry-standard symmetric key methods, including RC2 and RC5, along with creating one of the most widely used stream cipher methods: RC4. His name was on standard hashing methods too, including MD2, MD4, MD5, and MD6. Bruce Schneier, too, was one of the stars of the industry, with a long track record of creating useful methods, including TWOFISH and BLOWFISH. Final After nearly two years of review, NIST opened up to comments on the methods, which ran until May 2000. A number of submissions were taken, and the finalists seemed to be free from attacks, with only a few attacks on simplified versions of the methods being possible: As we can see in Table 1, the methods had different numbers of rounds: 16 (Twofish), 32 (Serpent), 10, 12, or 14 (Rijndael), 20 (RC6), and 16 (MARS). Rijndael had a different number of rounds for different key sizes, with 10 rounds for 128-bit keys and 14 for 256-bit keys. Its reduced number of rounds made it a strong candidate to be the winner. At the AES conference to decide the winner, Rijndael received 86 votes, Serpent 59, Twofish 31, RC6 23, and MARS 13. Although Rijndael and Serpent were similar, and both used S-boxes, Rijndael had fewer rounds and was faster, while Serpent had better security. The NIST scoring was: Conclusions AES has advanced cybersecurity more than virtually all the other methods put together. Without it, the Internet would be a rat's nest of spying and person-in-the-middle attacks, and would be a complete mess.
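To see what all that history bought us, here is a minimal Go sketch of AES-256-GCM (the mode in which AES is most often used today), where Bob and Alice share the same 32-byte secret key. The key and message are purely illustrative, and a real system would derive the key properly:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptGCM seals a message with AES-256-GCM under a 32-byte key, returning
// a fresh random nonce and the ciphertext (which includes the auth tag).
func encryptGCM(key, plaintext []byte) (nonce, ciphertext []byte) {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}
	return nonce, aead.Seal(nil, nonce, plaintext, nil)
}

// decryptGCM opens the ciphertext and checks its authentication tag.
func decryptGCM(key, nonce, ciphertext []byte) []byte {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	return plaintext
}

func main() {
	// Shared 256-bit key (randomly generated here just for the demo).
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	nonce, ct := encryptGCM(key, []byte("hello Alice"))
	fmt.Printf("recovered: %s\n", decryptGCM(key, nonce, ct))
}
```

Note that GCM authenticates as well as encrypts: if the ciphertext is tampered with, Open returns an error rather than silently producing garbage.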
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27926850
info_outline
Bill Buchanan - Test-of-Time (ToT) for Research Papers: Some Papers Rocket, Some Papers Crash, but Most Never Go Anywhere
09/03/2023
Bill Buchanan - Test-of-Time (ToT) for Research Papers: Some Papers Rocket, Some Papers Crash, but Most Never Go Anywhere
In research, the publishing of high-quality papers is often critical for the development of a research career: “I am an academic. It’s publish or perish.” (Daniel J. Bernstein). But often we should measure the work in terms of quality rather than quantity. One high-quality research paper is probably worth more than the millions of papers published in predatory journals. A great researcher should be able to measure the quality of their work by the known impact and contribution of their research papers, and not by citation count or journal impact factor. In fact, review papers often contribute little to the development of new methods, but are some of the most highly cited papers. A research paper thus has a life. Authors might have a dream that their work is going to fundamentally change a given field, but it ends up never being read much and withers. Overall, most papers just bob along with a few citations a year, and you are lucky if you get more than 10 citations. Academics often follow the impact of their papers on Google Scholar, which can give an idea of whether their work is rising or on the wane. If you are interested, here’s mine, showing a nice exponential rise over the past few years: Some papers might rocket with many initial citations, where researchers cite them heavily, but then either the research area just dies off with a lack of interest, or problems are found with it. Isogenies within post-quantum methods are one example of this, where a single crack on SIDH (Supersingular Isogeny Diffie-Hellman) stopped some of the advancements in the field []: Up to that point, isogenies were the poster child and the great hope for competing with lattice methods. While they were still slow, researchers were gearing up their research to address many of their performance weaknesses. They were much loved, as they used elliptic curves, but one paper stalled the isogeny steam train. 
I do believe they will return strong, but it will take a while to recover from such a serious crack. Cryptography is often about reputation, and a single crack can bring a whole method down. Other papers, though, can be slow burners. The core papers in ECC (Elliptic Curve Cryptography), for example, did not take off until a few years after the work was published. When Neal Koblitz published his paper on “Elliptic curve cryptosystems” in 1987, it was hardly cited, and few people picked up on its potential to replace RSA signatures. In 1997 (10 years after the publication of the paper), it had still only achieved 41 citations. But things really took off around 2005, and especially when Satoshi Nakamoto adopted ECC for Bitcoin around 2009. It now sits at nearly 400 citations per year, and ECDSA and EdDSA have made a significant impact in replacing our cumbersome RSA methods: Test-of-Time (ToT) Award Now Chris Peikert, Brent Waters, and Vinod Vaikuntanathan (Via-kun-tan-athan) have been awarded the International Association for Cryptologic Research (IACR) Test-of-Time (ToT) Award for a paper entitled “A Framework for Efficient and Composable Oblivious Transfer”, presented at the Crypto 2008 conference [][1]: Overall, the Test-of-Time Award is given to papers published over 15 years ago at one of the three IACR general conferences (Eurocrypt, Crypto and Asiacrypt). The developed framework integrates “universal composability”, which provides strong security properties. Basically, a protocol P1 is secure if another protocol (P2) emulates P1, and it is not possible to tell the two apart. It introduced a simple “dual-mode cryptosystem” method. The work has been fundamental in creating Oblivious Transfer protocols, which are used in Multi-Party Computation (MPC). A great advancement of the paper is its usage of Learning With Errors (LWE), which is now used within lattice cryptography methods. 
The paper has since laid a foundation for lattice cryptography. As with the ECC papers, it was a slow burner [], with only 11 citations in 2008, but it rose to more than 10 times that number: MPC So, let’s see if we can build a model where we can securely distribute values and then get our nodes to compute the result of a calculation. None of the nodes should be able to compute the result without the help of the others, and Trent is trusted to distribute the inputs, watch the broadcasts, and then gather the results. For this, we can use Shamir secret shares, where a value can be split into t-from-n shares, and we need t shares to rebuild the value. So, we could distribute a 2-from-3 share set to Bob, Alice and Eve, and then Bob and Alice, or Alice and Eve, could rebuild the value again. So let’s say we have two values, x and y, and we want to compute x×y. We initially start with n parties, and we define a threshold of t (the minimum number of shares required to rebuild any value). Initially, Trent (the trusted dealer) splits the input values of x and y into shares: Shares_x = x1, x2, … xn and Shares_y = y1, y2, … yn. Next, Trent sends one share of each to each of the nodes, such as xi and yi to node i. Each node must then gather at least t shares from the other nodes and add them to its own share. Each node is then able to rebuild the values of x and y, and then compute x×y. Trent then receives all the results back and makes a judgement on the consensus. If we have 12 nodes, then if at least eight nodes are fair, the result will be the correct one. 
Here is the code []:

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strconv"

	"github.com/codahale/sss"
)

// combineInt rebuilds a secret from a subset of shares and parses it as an integer.
func combineInt(subset map[byte][]byte) int {
	val, _ := strconv.Atoi(string(sss.Combine(subset)))
	return val
}

func mult(subset1, subset2 map[byte][]byte) int {
	return combineInt(subset1) * combineInt(subset2)
}

func add(subset1, subset2 map[byte][]byte) int {
	return combineInt(subset1) + combineInt(subset2)
}

func sub(subset1, subset2 map[byte][]byte) int {
	return combineInt(subset1) - combineInt(subset2)
}

// getShares picks k of the n shares (enough to rebuild the secret).
func getShares(shares map[byte][]byte, k byte) map[byte][]byte {
	subset := make(map[byte][]byte, k)
	for x, y := range shares {
		fmt.Printf("Share:\t%d\t%s ", x, hex.EncodeToString(y))
		subset[x] = y
		if len(subset) == int(k) {
			break
		}
	}
	fmt.Printf("\n")
	return subset
}

func main() {
	a := "10"
	b := "11"
	n1 := 5
	k1 := 3

	args := os.Args[1:]
	if len(args) > 0 {
		a = args[0]
	}
	if len(args) > 1 {
		b = args[1]
	}
	if len(args) > 2 {
		k1, _ = strconv.Atoi(args[2])
	}
	if len(args) > 3 {
		n1, _ = strconv.Atoi(args[3])
	}

	n := byte(n1)
	k := byte(k1)

	fmt.Printf("a:\t%s\tb: %s\n\n", a, b)
	fmt.Printf("Policy. Any %d from %d\n\n", k1, n1)

	if k1 > n1 {
		fmt.Printf("Cannot do this, as k is greater than n")
		os.Exit(0)
	}

	shares1, _ := sss.Split(n, k, []byte(a))
	shares2, _ := sss.Split(n, k, []byte(b))

	aSubset := getShares(shares1, k)
	bSubset := getShares(shares2, k)

	fmt.Printf("\na*b= %d\n", mult(aSubset, bSubset))
	fmt.Printf("a+b= %d\n", add(aSubset, bSubset))
	fmt.Printf("a-b= %d\n", sub(aSubset, bSubset))
}

A sample run is []:

a: 10   b: 11
Policy.
Any 3 from 5

Share: 5 fe87
Share: 1 8bd2
Share: 2 16c7

Share: 2 e47a
Share: 3 db58
Share: 4 1a9b

a*b= 110
a+b= 21
a-b= -1

and:

a: 9999   b: 9998
Policy. Any 3 from 6

Share: 1 968ada76
Share: 2 44fc9b0c
Share: 3 eb4f7843

Share: 4 6d4bf67a
Share: 5 1cbaf095
Share: 6 3ef251e3

a*b= 99970002
a+b= 19997
a-b= 1

Conclusions The paper by Peikert, Vaikuntanathan, and Waters laid the ground for so many areas, including MPC and lattice-based cryptography. In the 15 years since it was published, it has been cited over 821 times, and it is a highly recommended read. And, so, don’t measure the initial impact of a paper by the number of citations it receives after a year or two, as its time may be yet to come. For ECRs … have faith in your work … if you keep your focus, your work will get noticed. If not, you perhaps have the wrong focus or are in the wrong field. References [1] Peikert, C., Vaikuntanathan, V., & Waters, B. (2008). A framework for efficient and composable oblivious transfer. In Annual International Cryptology Conference (CRYPTO 2008). Springer, Berlin, Heidelberg. [2] Koblitz, N. (1987). Elliptic curve cryptosystems. Mathematics of Computation, 48(177), 203–209.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27926772
info_outline
Bill Buchanan - PQC Gets A Tombstone Notice
08/29/2023
Bill Buchanan - PQC Gets A Tombstone Notice
And, so, we are moving into one of the greatest changes that we will ever see on the Internet, where we will transition from our existing public key infrastructures towards Post Quantum Cryptography (PQC) methods. At the present time, NIST has approved one key exchange/public key encryption method (Kyber) and three digital signature methods (Dilithium, Falcon and SPHINCS+). The focus will now be on seamless integration, where we will likely use hybrid methods initially, combining our existing ECDH method with Kyber, and mixing either RSA, ECDSA or EdDSA digital signatures with Dilithium. Key exchange is (relatively) straightforward Overall, it is fairly easy to create a hybrid key exchange method from Kyber and ECDH, where we would transmit both the ECC public key and the Kyber public key in the same packet. In fact, Google is already testing its integration in Chrome. With this, our existing key sizes are []:

Type                 Public key size (B)   Secret key size (B)   Ciphertext size (B)
------------------------------------------------------------------------------------
P256_HKDF_SHA256              65                    32                    65
P384_HKDF_SHA384              97                    48                    97
P521_HKDF_SHA512             133                    66                   133
X25519_HKDF_SHA256            32                    32                    32
X448_HKDF_SHA512              56                    56                    56

Thus, for P256, we have a 32-byte private key (256 bits) and a 65-byte public key (520 bits). Kyber512 increases this to 1,632 bytes for the private key and 800 bytes (6,400 bits) for the public key:

Type         Public key size (B)   Secret key size (B)   Ciphertext size (B)
----------------------------------------------------------------------------
Kyber512              800                 1,632                   768
Kyber768            1,184                 2,400                 1,088
Kyber1024           1,568                 3,168                 1,568

Thus, to use a hybrid key exchange method, we would include the ECC public key and the Kyber512 public key, and thus have a packet which contains 832 bytes. This is smaller than the 1,500-byte limit for an IP packet and thus requires only one packet to send the public key from Bob to Alice (and vice versa). 
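As a sketch of the classical half of such a hybrid, Go's standard library (crypto/ecdh, Go 1.20+) can perform the X25519 exchange. The Kyber shared secret would come from a PQC library; here it is stood in for by a placeholder random value, which is an assumption for illustration only. The two secrets are concatenated and hashed to derive the hybrid session key, where a real design would use a proper KDF such as HKDF:

```go
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// hybridKey derives a single session key from the classical (ECDH) shared
// secret and the post-quantum shared secret by hashing their concatenation.
// A production design would use a KDF such as HKDF instead of a bare hash.
func hybridKey(classical, postQuantum []byte) [32]byte {
	return sha256.Sum256(append(append([]byte{}, classical...), postQuantum...))
}

func main() {
	curve := ecdh.X25519()

	// Bob and Alice each generate an X25519 key pair (32-byte public keys).
	bobPriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	alicePriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Each computes the same ECDH shared secret from the other's public key.
	bobSecret, _ := bobPriv.ECDH(alicePriv.PublicKey())
	aliceSecret, _ := alicePriv.ECDH(bobPriv.PublicKey())

	// Placeholder for the Kyber shared secret, which a PQC library would
	// produce via encapsulation/decapsulation (assumed here, not real Kyber).
	kyberSecret := make([]byte, 32)
	if _, err := rand.Read(kyberSecret); err != nil {
		panic(err)
	}

	bobKey := hybridKey(bobSecret, kyberSecret)
	aliceKey := hybridKey(aliceSecret, kyberSecret)
	fmt.Println("keys match:", bobKey == aliceKey)
}
```

The design point is that an attacker must break both the classical and the post-quantum exchange to recover the derived session key.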
A hybrid method is defined here: and a test run is:

Method: Kyber512-X25519
Public Key (pk) = 3BF9B5BB236AD036BA65B1B532E11927E20269D3CE74009E6C085F0D901F5CC9 (first 32 bytes)
Private key (sk) = B96B644DE170BA19266AF32BFA4B3B22A4917888A2EE785C701B7252D6308573 (first 32 bytes)
Cipher text (ct) = 0E54F37E171768318B45FD27FBDB08B33CD2204142C4B925BB395DA93AE26EA7 (first 32 bytes)
Shared key (Bob):   C0B27940D588EE1D0F8348F169BA04A48E0E7FA7DE5B8A091D5D1B59E70D577EEAC4180B076595B2EFCCE96E2271EEA3B20228FC3FD5B63114D32E9D20D9A2F2
Shared key (Alice): C0B27940D588EE1D0F8348F169BA04A48E0E7FA7DE5B8A091D5D1B59E70D577EEAC4180B076595B2EFCCE96E2271EEA3B20228FC3FD5B63114D32E9D20D9A2F2
Length of Public Key (pk) = 832 bytes
Length of Secret Key (sk) = 1664 bytes
Length of Cipher text (ct) = 800 bytes

Digital signatures and PKI are not so easy But what will happen with the next part of the process, where we need to digitally sign something with a private key and then verify it with the public key? This is an important element in HTTPS, where ECDH is used to exchange the symmetric key, and then digital signatures are used to verify the identity of the server. For this, we use digital certificates (X.509), which contain the public key of the entity and which have been signed by a trusted entity (Trent). Well, at the present time, it is not quite clear yet, but a new IETF draft perhaps gives some insights []: The draft outlines how we could include two public keys in the same certificate: such as an ECC or RSA public key and a PQC public key. Unfortunately, it has been given a “Tombstone notice”, which means it will not progress. The reason for this is that it adds a PQC key whether or not the host actually wants (or uses) it. Along with this, it does not give a mechanism for coping with two signatures on a certificate (a traditional one and a PQC one), and it is not possible to detect when one of the signatures has been removed — a stripping attack. 
Public key sizes for Dilithium Like it or not, the days of small public key sizes are coming to an end. In ECC, for NIST P256, we have a 32-byte (256-bit) private key and a 64-byte (512-bit) public key. For Ed25519, we use Curve 25519, which reduces the public key to just 32 bytes (256 bits). For RSA 2K, we have a 256-byte private key (2,048 bits) and a 256-byte public key (2,048 bits). The equivalent security for Dilithium is Dilithium2, which gives a much larger private key of 2,528 bytes (20,224 bits) and a public key of 1,312 bytes (10,496 bits). The Dilithium public key is thus over 20 times larger than the ECC key. This could be a major overhead in communication systems, where more than one data packet would have to be sent in order to transmit the public key.

Method                           Public key size   Private key size   Signature size   Security level
------------------------------------------------------------------------------------------------------
Crystals Dilithium 2 (Lattice)        1,312             2,528             2,420        1 (128-bit) Lattice
Crystals Dilithium 3                  1,952             4,000             3,293        3 (192-bit) Lattice
Crystals Dilithium 5                  2,592             4,864             4,595        5 (256-bit) Lattice
Falcon 512 (Lattice)                    897             1,281               690        1 (128-bit) Lattice
Falcon 1024                           1,793             2,305             1,330        5 (256-bit) Lattice
Sphincs SHA256-128f Simple               32                64            17,088        1 (128-bit) Hash-based
Sphincs SHA256-192f Simple               48                96            35,664        3 (192-bit) Hash-based
Sphincs SHA256-256f Simple               64               128            49,856        5 (256-bit) Hash-based
RSA-2048                                256               256               256
ECC 256-bit                              64                32               256

A hybrid scheme is defined here: For our existing signatures, we have:

Method      Public key size (B)   Private key size (B)   Signature size (B)   Security level
---------------------------------------------------------------------------------------------
Ed25519              32                    32                    64           1 (128-bit) EdDSA
Ed448                57                    57                   112           3 (192-bit) EdDSA
ECDSA                64                    32                    48           1 (128-bit) ECDSA
RSA-2048            256                   256                   256           1 (128-bit) RSA

And in using a hybrid approach, we increase the signature size from 64 bytes for Ed25519 to 2,484 bytes — a 38-fold increase in size: 
Method                           Public key size   Private key size   Signature size   Security level
------------------------------------------------------------------------------------------------------
Crystals Dilithium2-X25519            2,560             1,344             2,484        1 (128-bit) Lattice
Crystals Dilithium3-X25519            4,057             2,009             3,407        3 (192-bit) Lattice
Crystals Dilithium 2 (Lattice)        1,312             2,528             2,420        1 (128-bit) Lattice
Crystals Dilithium 3                  1,952             4,000             3,293        3 (192-bit) Lattice
Crystals Dilithium 5                  2,592             4,864             4,595        5 (256-bit) Lattice
Falcon 512 (Lattice)                    897             1,281               690        1 (128-bit) Lattice
Falcon 1024                           1,793             2,305             1,330        5 (256-bit) Lattice
Sphincs SHA256-128f Simple               32                64            17,088        1 (128-bit) Hash-based
Sphincs SHA256-192f Simple               48                96            35,664        3 (192-bit) Hash-based
Sphincs SHA256-256f Simple               64               128            49,856        5 (256-bit) Hash-based

And, so, what about using SPHINCS+? That has a 32-byte public key. The downside is that Sphincs SHA256-128f requires a 17,088-byte signature. This would also overload the communication channel and require over 11 packets to send a single signature to prove the identity of a Web site. It is highly unlikely that SPHINCS+ will be used to replace RSA and ECC signatures in TLS, but it could be used in applications that are not constrained by the communications channel. Conclusions And, so, the double public key draft has been sent back, and we are awaiting the next iteration. Learn more about PQC here:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27879543
info_outline
Bill Buchanan - Be More BBN Than IBM
08/25/2023
Bill Buchanan - Be More BBN Than IBM
Please excuse me for using IBM in the title — I have the greatest of respect for a company that has continued to lead and innovate over the past six decades (and which has existed for over a century). The point of this article is to showcase what happens when you, your team or your company have a deep passion for doing something great. For this, we go back to the roots of one of the greatest inventions in the history of humankind: the Internet. In fact, we would probably not have the Internet without one magical little company (BBN) and the vision of one person (Larry Roberts). At the time, most had the word “FAILURE” written over the ARPANET project, and if it had failed, the Internet would probably never have happened. Think about that for a few minutes. If we go right back to the creation of ARPANET, it was Larry Roberts who published an RFQ (Request For Quote) to interested companies. The task was to build an IMP (Interface Message Processor) that would route data across an interconnected network, and thus connect disparate computer systems together. While most approaches at the time focused on cumbersome and centralised circuit switching, Larry wanted to use a packet-switched approach. And, so, the big companies prepared their bids and did their usual tendering process, basically taking what they had and delivering just to the requirements. Few of them had any faith in what was being built and could only see this as another failed government research project that would go nowhere. And integrating with academia, too, was always going to be a challenge, as academics would want to build something that protected their resources while enabling them to extend their research. In fact, IBM's solution was to use the large System/360 mainframe computer to undertake the task of routing data. 
Anyone who has ever bid for a government contract will know that when you submit it, you think you will win it, but this confidence decays over time, and you often move to a state of knowing that you will not get it. But, while companies like DEC, Raytheon and IBM failed to see how the creation of the IMP would go anywhere, there was one company that put its heart and soul into the bid: BBN. In fact, it is thought that they spent around six months developing the bid. For this, they did a full investigation into the workings of the IMP, and had even investigated the hardware and code that it would require. And, so, while they were honest in saying that it was going to be a major challenge, they laid out the route to the solution and shared their insights. This showed Larry that, like him, they saw this as not just another project but one that matched the vision of the company. And, for such a project, most of the companies defined long chains of authority and management, whereas BBN’s approach was to have a single point of focus and a simplified management approach. Basically, there was a single contact for every question, rather than long lines of delegated responsibility. At the time, people used to say, “No one gets fired for buying IBM”, so Larry was laying his whole reputation on the line by going with this small company, which had little in the way of resources to compete with IBM or DEC. But they had passion and vision and wanted the contract with all their hearts. The company was successful in other ways and did not need the grant to sustain it, but they knew its importance. A failure of this project, and there would be no more building of packet-switched networks, and possibly no future Internet. And, so, they invested much more time than virtually all the other bidders put together. In fact, BBN was actually the first to have an Autonomous System Number (AS1). 
This is a special number which makes routing on the Internet so much easier, as we just need to know which autonomous system to give our data to in order to get it routed to the destination. This can be an intermediary route through the AS, or the AS can host the target device. The choice of an AS approach — using BGP (Border Gateway Protocol) — has been one of the most fundamental elements in building the Internet at scale. While not perfect, it works! BBN also strived to secure BGP, as it was fundamentally important that no single entity — especially a malicious one — could take over the routing of the Internet. In fact, BBN invented the link-state routing method, which allowed the “best” route to a destination to be discovered through the intercommunication of routing tables between devices. Now, Level 3 Communications uses AS1. BBN, too, was one of the first companies to be an Internet service provider and was the second organisation in the world to register a domain name (on 24 April 1985, with bbn.com):

Domain Name: bbn.com
Registry Domain ID: 4240240_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.corsearch.domains
Registrar URL:
Updated Date: 2023-05-09T19:30:10Z
Creation Date: 1985-04-24T05:00:00Z
Registrar Registration Expiration Date: 2024-04-25T04:00:00Z
Registrar: Corsearch Domains LLC
Registrar IANA ID: 642
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.8007327241
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited

And, here is the BBN Web page from 1985 []: Why were BBN so successful? They had a passion and a drive, and they recruited the best talent around. In fact, BBN was sometimes known as “the third university” in Cambridge, alongside Harvard and MIT. The key to any innovative company’s success is its HR function, in making sure it gets the best talent around. One great engineer with passion and drive can often trump teams of hundreds. 
But they must want to do the work, so they must be given stimulating and challenging roles. After innovating in so many areas, in 2009, BBN became a wholly owned subsidiary of one of the companies it had beaten to the ARPANET contract: Raytheon. Conclusion And, so, if you really want something, put your heart and soul into it, and show the grant/contract reviewers that this is all just part of your vision to build a better world.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27848427
info_outline
Bill Buchanan - A Bluffer’s Guide To Encryption In The Cloud: Top 100
08/21/2023
Bill Buchanan - A Bluffer’s Guide To Encryption In The Cloud: Top 100
In cybersecurity, the teaching of Cloud security is often weak. So, here are my Top 100 things about encryption in the Cloud. I’ve focused on AWS, but much of this is also applicable to Azure. Keys are created in the AWS KMS (Key Management Service). In Azure, this is named Key Vault. The cost of using a key in KMS is around $1/month (prorated hourly). When a key is disabled, it is not charged. With AWS KMS, we use a shared customer HSM (Hardware Security Module), and with AWS CloudHSM it is dedicated to one customer. For data at rest, with file storage, we can integrate encryption with Amazon EBS (Elastic Block Storage) and Amazon S3. Amazon EBS drives are encrypted with AES-256 in XTS mode. For AWS-managed keys, a unique key is used for every object within S3 buckets. Amazon S3 uses server-side encryption to store encrypted data. The customer can use client-side encryption to encrypt data before it is stored in the AWS infrastructure. AWS uses 256-bit Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) for its symmetric key encryption. In AWS S3, by default, all objects are encrypted. For data at rest, for databases, we can integrate encryption with Amazon RDS (AWS’s relational database service) and Amazon Redshift (AWS’s data warehousing). For data at rest, we can integrate encryption into ElastiCache (AWS’s content caching service), AWS Lambda (AWS’s serverless computing service), and Amazon SageMaker (AWS’s machine learning service). Keys are tokenized and have an ARN (Amazon Resource Name) and an alias. An example ARN for a key is arn:aws:kms:us-east-1:103269750866:key/de30e8e6-c753-4a2c-881a-53c761242644, and an example alias is “Bill’s Key”. Both of these should be unique in the user’s account. To define a KMS key, we can use either its key ID, its key ARN, its alias name, or its alias ARN. You can link keys to other AWS accounts. 
For this, we specify them in the form of “arn:aws:iam::[AWS ID]:root”, and where AWS ID is the ID of the other AWS account. To enhance security, we can use AWS CloudHSM (Hardware Security Module). For simpler and less costly solutions, we typically use AWS KMS (Key Management Service). For CloudHSM, we pay per hour, but for KMS, we just pay for the usage of the keys. The application of the keys is restricted to defined services. Key identifiers and policies are defined with JSON key-value pairs. Each key should have a unique GUID, such as “de30e8e6-c753-4a2c-881a-53c761242644”. Users and roles are identified with an ARN, such as “arn:aws:iam::222222:root”. For the usage of keys, we have Key Administrator and Key Usage policies. Access is denied by default: a request is only allowed if a specific allow is defined in a policy. For key permissions, we have fields of “Sid” (the descriptive name of the policy), “Effect” (typically “Allow”), “Principal” (the ARN of the user/group), “Action” (such as Create, Disable and Delete) and “Resource”. A wildcard (“*”) allows or disallows all. To give the “root” user access to everything with a key, we would use: {“Sid”: “Enable IAM User Permissions”, “Effect”: “Allow”, “Principal”: {“AWS”: “arn:aws:iam::22222222:root”}, “Action”: “kms:*”, “Resource”: “*”}. The main operations within the KMS are to encrypt/decrypt data, sign/verify signatures, export data keys, and generate/verify MACs (Message Authentication Codes). Keys are either AWS managed (such as for the Lambda service), customer managed (created and managed by the customer), or held in custom key stores (where the customer has complete control over the keys). The main uses of keys are for EC2 (Compute), EBS (Elastic Block Storage) and S3 (Storage). AES symmetric keys or an RSA key pair are used to encrypt and decrypt. RSA uses 2K, 3K or 4K keys, with either “RSA PKCS#1 v1.5” or “RSA PSS” padding. 
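The deny-by-default behaviour of key policies can be sketched with a toy evaluator. This is purely an illustration of the principle, not AWS’s real evaluation logic (which also handles action wildcard patterns, conditions, and explicit Deny statements that override allows):

```python
# Toy sketch of deny-by-default policy evaluation (NOT the real AWS logic):
# access is only granted if some statement explicitly allows the action.
def is_allowed(policy_statements, principal, action):
    for stmt in policy_statements:
        if stmt["Effect"] != "Allow":
            continue
        if stmt["Principal"] != principal:
            continue
        # A wildcard ("kms:*") allows all KMS actions
        if stmt["Action"] in ("kms:*", action):
            return True
    return False  # no matching allow: denied by default

# The root-user statement from the text, as a Python dictionary
policy = [{"Sid": "Enable IAM User Permissions",
           "Effect": "Allow",
           "Principal": "arn:aws:iam::22222222:root",
           "Action": "kms:*",
           "Resource": "*"}]

print(is_allowed(policy, "arn:aws:iam::22222222:root", "kms:Encrypt"))  # True
print(is_allowed(policy, "arn:aws:iam::9999:user/eve", "kms:Encrypt"))  # False
```

The second call fails because no statement names that principal, showing the explicit-allow requirement.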
RSA PKCS#1 v1.5 padding is susceptible to Bleichenbacher’s attack, so it should only be used for legacy applications; for all others, we should use RSA PSS. For RSA, we can use a hashing method of SHA-256, SHA-384 or SHA-512. In RSA, we encrypt with the public key and decrypt with the private key. For signatures, we can use either RSA or ECC signing. For RSA, we have 2K, 3K, or 4K keys, whereas ECC signing uses NIST P256, NIST P384, NIST P521, and SECG P256k1 (as used in Bitcoin and Ethereum). For MACs (Message Authentication Codes), Bob and Alice have the same shared secret key and can authenticate the hashed version of a message. In the KMS, we can have HMAC-224, HMAC-256, HMAC-384 and HMAC-512. KMS uses hardware security modules (HSMs) validated to FIPS 140-2 and which cannot be accessed by AWS employees (or any other customer). Keys never appear on an AWS disk or backup, and only exist in the memory of the HSM. They are only loaded when used. Encryption keys are restricted to one region of the world (unless otherwise defined by the user). With symmetric keys, the key never appears outside the HSM, and for asymmetric keys (public key encryption), the private key stays inside the HSM, and only the public key is exported outside. AWS CloudWatch shows how and when the encryption keys are being used. The minimum time that can be set for a key to be deleted is seven days (and up to 30 days maximum). An organisation can also create its own HSM with a CloudHSM cluster. When a key is created in KMS, it is then stored in the cluster. The usage of encryption keys should be limited to a minimal set of service requirements. If possible, separate key managers and key users. With a key management (KEY_ADMINISTRATOR) role, we typically have the rights to create, revoke, put, get, list and disable keys. The key management role will typically not be able to encrypt and decrypt. 
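The HMAC construction that KMS exposes (such as HMAC_SHA_256) can be demonstrated locally with Python’s standard library. The key and message below are illustrative; in KMS, the key would stay inside the HSM:

```python
import hmac, hashlib

# Bob and Alice share the same secret key (an illustrative value)
key = b"our-shared-secret-key"
message = b"hello world"

# Generate an HMAC-SHA-256 tag over the message
tag = hmac.new(key, message, hashlib.sha256).digest()
print(len(tag) * 8)  # 256 (bits), matching HMAC-256 in the text

# Verify: recompute the tag and compare in constant time
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest()))   # True
print(hmac.compare_digest(tag, hmac.new(key, b"tampered", hashlib.sha256).digest()))  # False
```

Swapping hashlib.sha256 for sha224, sha384 or sha512 gives the other HMAC sizes listed above.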
For a key user (KEY_WORKER) role, we cannot create or delete keys and typically focus on tasks such as encrypting and decrypting. Have a rule of minimum access rights, and simplify user access by defining key administration and usage roles. Users are then added to these roles. Avoid manual updates to keys and use key rotation. The system keeps track of keys that are rotated and can use previously defined ones. The default time to rotate keys is once every year. Key rotation shows up in the CloudWatch and CloudTrail logs. KMS complies with PCI DSS Level 1, FIPS 140-2, FedRAMP, and HIPAA. AWS KMS is matched to FIPS 140-2 Level 2. AWS CloudHSM uses FIPS 140-2 Level 3 validated HSMs. AWS CloudHSM costs around $1.45 per hour to run, and the costs end when it is disabled or deleted. The CloudHSM is backed up every 24 hours, and we can cluster the HSMs into a single logical HSM. CloudHSM can be replicated across AWS regions. AWS KMS is limited to the popular encryption methods, whereas CloudHSM can implement a wider range of methods. CloudHSM can support methods such as 3DES with AWS Payment Cryptography. This complies with payment card industry (PCI) standards, such as PCI PIN, PCI P2PE, and PCI DSS. In the CloudHSM for payments, we can generate CVV, CVV2 and ARQC values, and where sensitive details never exist outside the HSM in an unprotected form. With the CloudHSM, we have a command-line interface where we can issue commands, named the CloudHSM CLI. Within the CloudHSM CLI, we can use the genSymKey command to generate a symmetric key within the HSM, where -t is the key type (31 is AES), -s is the key size (32 bytes) and -l is the label: genSymKey -t 31 -s 32 -l aes256. With genSymKey, the key types are: 16 (Generic Secret), 18 (RC4), 21 (Triple DES), and 31 (AES). 
Within the CloudHSM CLI, we can use the genRSAKeyPair command to generate an RSA key pair, where -m is the modulus and -e is the public exponent: genRSAKeyPair -m 2048 -e 65537 -l mykey. AWS CloudHSM is integrated with AWS CloudTrail, where we can track a user, role, or AWS service within AWS CloudHSM. With AWS Payment Cryptography, 2KEY TDES is two-key Triple DES and has a 112-bit equivalent key size. The PIN Encryption Key (PEK) is used to encrypt PIN values and uses a 2KEY TDES key. This can store PINs in a secure way, and then decrypt them when required. S3 buckets can be encrypted either with Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys. There is no cost to use SSE-S3 keys. For symmetric key encryption, AWS uses envelope encryption, where a random data key is used to encrypt the data, and then that data key is encrypted with the user’s key. AWS should not be able to access the key used for the encryption. The default in creating an encryption key is for it to be used only in a single region, but this can be changed to multi-region, where the key will be replicated across more than one region. In AWS, a region is a geographical area, which is split into isolated locations (availability zones). US-East-1 (N. Virginia) and US-East-2 (Ohio) are different regions, while us-east-1a, us-east-1b and us-east-1c are availability zones in the same region. A single-region key in the US-East-1 region would be available across us-east-1a, us-east-1b and us-east-1c, but not in us-east-2a, us-east-2b and us-east-2c. When creating a key, you can either create it in KMS, import a key (BYOK, bring your own key), create it in AWS CloudHSM, or create it in an external key store (HYOK, hold your own key). For keys stored on-premise, we can use an external key store (XKS); this can be defined as Hold Your Own Keys (HYOK), and where no entity in AWS will be able to read any of the encrypted data. You can BYOK (bring your own key) with KMS, and import keys. 
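The envelope encryption pattern described above can be sketched as follows. Note the sketch uses a simple SHA-256-derived XOR keystream as a stand-in for the AES-GCM that AWS actually uses, purely to show the structure: a random data key encrypts the data, and a master key (which never leaves the KMS) encrypts the data key:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy cipher: XOR with a SHA-256-derived keystream.
    A stand-in for AES-GCM, purely to illustrate the envelope structure."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

master_key = secrets.token_bytes(32)   # held by the KMS, never leaves the HSM
data_key = secrets.token_bytes(32)     # random per-object data key

plaintext = b"secret document"
ciphertext = keystream_xor(data_key, plaintext)    # data encrypted with the data key
wrapped_key = keystream_xor(master_key, data_key)  # data key encrypted with the master key

# To decrypt: unwrap the data key with the master key, then decrypt the data
recovered_key = keystream_xor(master_key, wrapped_key)
print(keystream_xor(recovered_key, ciphertext) == plaintext)  # True
```

Only the wrapped data key and the ciphertext are stored; without the master key, neither can be recovered.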
KMS will keep a copy of this key. With XKS, we need a proxy URI endpoint, with the proxy credentials of an access key ID and secret access key. To export keys from AWS CloudHSM, we can encrypt them with an AES key. This is known as key wrapping, as defined in RFC 3394 (without padding) or RFC 5649 (with padding). A strong password should always be used for key wrapping. AWS encryption operations can be conducted either from the command line or within an API, such as with Python, Node.js or Golang. With KMS, the maximum data size is 4,096 bytes for a symmetric key, 190 bytes for RSA 2048 OAEP SHA-256, 318 bytes for RSA 3072 OAEP SHA-256, and 446 bytes for RSA 4096 OAEP SHA-256. An example command to encrypt the file 1.txt with symmetric key encryption is: aws kms encrypt --key-id alias/MySymKey --plaintext fileb://1.txt --query CiphertextBlob --output text > 1.out To decrypt a file with symmetric key encryption, an example with 1.enc is: aws kms decrypt --key-id alias/BillsNewKey --output text --query Plaintext --ciphertext-blob fileb://1.enc > 2.out In Python, to integrate with KMS, we use the Boto3 library. The standard output of encrypted content is in byte format. If we need a text version of the ciphertext, we typically use Base64 format. The base64 command can be used to convert between Base64 and byte format, such as with: $ base64 -i 1.out --decode > 1.enc The xxd command in the command line allows the ciphertext to be dumped to a hex output, which can then be edited. 
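The Base64 and hex conversions mentioned above can also be done directly in Python with the standard library (the ciphertext bytes below are an illustrative value):

```python
import base64, binascii

ciphertext = b"\x01\x02\xfe\xff example ciphertext bytes"

# Bytes to Base64 text (the text form produced by `aws kms encrypt --output text`)
b64 = base64.b64encode(ciphertext).decode()
print(b64)

# Base64 text back to bytes (the equivalent of `base64 --decode`)
print(base64.b64decode(b64) == ciphertext)  # True

# Bytes to a hex string and back (roughly what xxd and xxd -r do at the command line)
hexdump = binascii.hexlify(ciphertext).decode()
print(binascii.unhexlify(hexdump) == ciphertext)  # True
```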
We can then convert it back to a binary output with the xxd -r command. An example piece of Python code for encrypting a plaintext message with the symmetric key in Python is: ciphertext = kms_client.encrypt(KeyId=alias, Plaintext=bytes(secret, encoding='utf8')) An example piece of Python code to decrypt some ciphertext (in Base64 format) is: plain_text = kms_client.decrypt(KeyId=alias, CiphertextBlob=bytes(base64.b64decode(ciphertext))) To generate an HMAC signature for a message in the command line, we have the form of: aws kms generate-mac --key-id alias/MyHMACKey --message fileb://1.txt --mac-algorithm HMAC_SHA_256 --query Mac > 4.out To verify an HMAC signature for a message in the command line, we have the form of: aws kms verify-mac --key-id alias/MyHMACKey --message fileb://1.txt \ --mac-algorithm HMAC_SHA_256 --mac fileb://4.mac To create an ECDSA signature in the command line, we have the form of: aws kms sign --key-id alias/MyPublicKeyForSigning --message fileb://1.txt --signing-algorithm ECDSA_SHA_256 --query Signature > 1.out To verify an ECDSA signature in the command line, we have the form of: aws kms verify --key-id alias/MyPublicKeyForSigning --message fileb://1.txt --signature fileb://1.sig --signing-algorithm ECDSA_SHA_256 To encrypt data using RSA in the command line, we have the form of: aws kms encrypt --key-id alias/PublicKeyForDemo --plaintext fileb://1.txt --query CiphertextBlob --output text --encryption-algorithm RSAES_OAEP_SHA_1 > 1.out To decrypt data using RSA in the command line, we have the form of: aws kms decrypt --key-id alias/PublicKeyForDemo --output text --query Plaintext --ciphertext-blob fileb://1.enc --encryption-algorithm RSAES_OAEP_SHA_1 > 2.out To sign data using RSA in the command line, we have the form of: aws kms sign --key-id alias/MyRSAKey --message fileb://1.txt --signing-algorithm RSASSA_PSS_SHA_256 --query Signature --output text > 1.out To verify data using RSA in the command line, we have the form of: aws kms verify --key-id alias/MyRSAKey --message 
fileb://1.txt --signature fileb://1.sig --signing-algorithm RSASSA_PSS_SHA_256 You cannot encrypt data with Elliptic Curve keys; in KMS, only RSA and AES keys can do that. Elliptic Curve keys are used to sign data. If you delete an encryption key, you will not be able to decrypt any ciphertext that uses it. We can store our secrets, such as application passwords, in the Secrets Manager. For example, with a secret name of “my-secret-passphrase” and a secret string of “Qwerty123”, we can have: aws secretsmanager create-secret --name my-secret-passphrase --secret-string Qwerty123 In China Regions, along with RSA and ECDSA, you can use SM2 KMS signing keys. In China Regions, we can use SM2PKE to encrypt data with asymmetric key encryption. Find out more here:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27806301
Bill Buchanan - Top 101 Tips for a PhD student and ECR
08/18/2023
Well, here are a few tips for PhD students and ECRs (Early Career Researchers): Enjoy doing research. It is fun and one of the few times in your career when it is solely your work. To do a PhD is a privilege and not a chore. You will likely look back on it as one of the most useful things you did in your whole career. You will always hit a dip in your research. Know when that is happening, and find ways out of it. Change something in your approach. Re-ignite yourself with new topics or methods. Find a great new paper that has just been published. Fight the dip! Two years of a PhD pass by fast. Be ready for the “last year of research” spike. We often do research to repeat what others have done and add our little bit. You can’t add your little bit unless you have repeated the work of others. Validate and verify your work before you evaluate it. One slip, and everything can fall apart. Most people have flaws in the initial versions of their work, so don’t worry if you find flaws; it’s all part of refining your work. We are human, by the way! Be able to show an external person the work you have done in validating that what you have is correct. Always be ready to point to peer-reviewed work to show that something is correctly defined. Doodles with pen and paper are great for getting your mind in gear. Have a thick skin — both with your supervisors, others around you, and, most of all, peer reviewers and your external examiner. Most peer reviewers are trying to help you, while others are just nasty for the sake of it or have not created the paper that they wanted. Try to spot the bad/nasty reviewer and focus on the helpful reviewers. Few people see your failures, but most will see your successes. Know your successes when they arrive, and write them down as you progress. At the end of your work, you should be able to show the successes you had along the way. Have a vision for your work, and continually refine it. 
Define your own beliefs, ethics and standards for your work and stick to these, such as “I will not release drafts for review until I have fully read them”, “I will return updates to drafts commented on by my supervisors within one week”, and “I will not publish in poor-quality outlets”. Agree these with your supervisory team, and get them to commit to things from their side. Define missions within your work and strive for these, and when that mission is achieved, go on to the next one (unless you get to the end, of course). Don’t end up just being theoretical. A core part of a PhD is doing practical work, too. Make sure you code and experiment. Don’t spend one year doing a literature review. Get coding and run experiments. A thesis is not a chronological diary. It should be written with an aim to show some new novelty or knowledge, and not the sequence of things you did in your research. Throughout your work, especially in the 2nd and 3rd year of a PhD, continually run small experiments and get some results. Have a hypothesis about experiments, and prove or disprove it. Know the top people in your field, and be able to quote their work. Be inspired by other researchers. Be humble about your own work, and help others. Ask for advice from others where your supervision team lacks skills, such as contacting pure mathematicians or physicists. Don’t be shy in saying that you don’t understand something. Don’t ever copy and paste work from others into your own work. Rephrase it in your own words. Don’t use AI tools for descriptions. The reader will typically spot these — as the writing style often changes. Be consistent in your writing style. Read the work of others — especially great science/technical writers — and understand the methods they use to engage readers. Define simple, practical and useful abstractions of the techniques you are defining. 
Abstract your work into other areas and get people to think in other ways around the methods you are defining … “let’s think about the little boy who put his finger in the dam; if we had a mathematical equation for this, we would …” Many would define this as, “Explain it to a smart 12-year-old child”. Explain your work to your family and friends. If they can’t understand the problem and your solution, refine it until they can. Always be ready to give an elevator pitch … you have two minutes in a lift with Bill Gates and need to define the problem, your solution, and the potential. Know the potential impact of your work. Is it technical advancement? Is it social change? If everything worked well, and you did invent an amazing new widget, what would be the best outcome? A tech unicorn? Saving 1,000s of lives? Reducing carbon emissions? Improving people’s lives? Protect your IP when you need to. Patents are one way to do this, so don’t just blindly publish everything you have. If you read papers and do not quite understand how the method works, reach out to the writers of the paper, and ask questions or pose ideas. They might not reply, but if they do, they may help you with your thoughts. Build a network of contacts outside your university, and be part of a community that shares knowledge. Supervise undergraduate students for their dissertations — but be considerate, and don’t expect them to be working at a PhD level. Know why you are doing research and your end objective. Define whether the PhD is an end goal or whether it defines the start of a research career. Plan your research career and aim for the job you hope to get in the future. Don’t add your name to poor-quality work … you will get a bad reputation. Know what esteem looks like in your area, and try to build it. Avoid publishing work which you are not proud of. If it is mainly your work, you must be the first author on the paper. 
Don’t use the first person of “I” or “me” in any publication or thesis, unless you are giving a personal statement of something. In reporting on your research progress, showcase that you can summarise well, and show examples of your research writing without overdoing it. Learn a new method every day/week. Don’t just read about a method; try to implement it in code, and see if it works. An abstract is not an introduction! The creation of an abstract is an art, and it is a distilled version of the thesis/paper. The title is the first thing that someone sees, so get it right! The abstract is the second thing that someone sees, so make sure that it is beautifully crafted and that the reader can finish there and know all about your work. Review, rewrite, review, rewrite, review, rewrite your abstract, and fit it into one page. The introduction and conclusions of each chapter are like bread in a sandwich. Make sure they hold the sandwich together. Review your introductions to chapters, and bring your reader back to the focus of the thesis and what you are going to show them. An introduction to a chapter should be less than one page; otherwise, it is too rambling. Get on point as to what a chapter intends to do, and get rid of anything that deviates from that. Use appendices to park material that just does not fit in a chapter, but remember to reference them. Few people ever read an appendix, so they possibly need less rigour in their structure and presentation. In a thesis, every word matters. Get rid of words that are not required. Enjoy some downtime, find a nice space, and properly read a paper. If possible, spend 2–4 hours just reading a paper without distractions. Find a buddy, read a paper together, and discuss it. This could be your supervisor, but you may find that they do not read the paper. Make sure that your supervisor checks your work for accuracy. If they do not, get someone else you trust to check the work. 
Do not submit papers without knowing that you have fully checked for typos and bad grammar. A paper full of annoying typos is the sign of a weak research team and an easy rejection. Make sure your papers have all the right features so that you will not get a rejection on the layout of a paper. Enough references? No typos? Aim and contribution defined? The literature review covered? The method defined clearly? Results well presented? Conclusions bringing back the main contribution and significant results? Read your work aloud, and if you stumble on the words … rewrite it. Be kind to your reader, and break up long runs of text with diagrams. Know your target reader(s) and their knowledge. Consider adding a theory/background chapter if you feel they need it, but also allow them to skip it if they know the area. A literature review is full of references. Virtually every paragraph should have at least one reference — otherwise, it is not a literature review. If you can, avoid the same reference being used continually for your literature … otherwise you are outlining someone else’s work and not yours. Don’t add your own analysis of methods in the literature review; leave that for later, such as in the conclusions. Everything in the literature review is the basis of published work. Avoid poor-quality sources … paper mills, blog posts, and social media quotes (unless your whole thesis is focused on this). Papers published in paper mills with poor standards should never be used, as they have not been rigorously reviewed. A picture is worth many words. Be kind to your reader, and abstract your thoughts with nice (and simple) diagrams that are not copied from others but are your own thoughts on the topic, and which link to the narrative. In fact, draw your chapter with pictures first, and then write around these pictures. In your diagrams, avoid small text, poor contrast, and too much complexity. Remember to add a reference in the text to every figure and every table. 
These references should appear before the figure or table. Don’t break your text with a figure. Make sure it floats to the bottom or the top of the next page. A chapter should be between 15 and 25 pages. If it is longer, split the chapter. If it is shorter, consider merging it with another chapter. The best PhD theses have a core of around five themes: Introduction; Literature Review; Method; Evaluation; and Conclusions. Be up-front about your contribution, and don’t overclaim that you have solved every problem in the area. Be humble, and be open about whose work you build on. Don’t have long paragraphs of text … give your reader a break, and break them up into paragraphs (but don’t make them too short, either). A rule of thumb — a paragraph should be at least 2–3 sentences and probably less than half a page of text. Read, revise, read, revise, read, revise … get into a spirit of continually reading your work. Trace relevant recent publications back to the classic paper which started a whole field of enquiry, and read that. Your writing skills will improve over time. Go back over things you have written in the past, and rewrite them. Speak to the reader as if they were in the same room as you. The first page of a paper or thesis is the most important part. Tell the reader the significance of the problem and why they should be interested. Hit them with facts and figures that are significant. Up-front, tell them what you are going to show them … it’s not a secret, and it is too late — at the end — to show them. Avoid Microsoft Word wherever possible, and use LaTeX (such as with Overleaf). Once you get over the barrier of learning to create a mark-up document, your production will be so much higher, and you will produce nicely defined papers in an instant. Use GitHub to store your code and documents. But remember to keep it private and only share it with trusted people. Link your LaTeX document to GitHub, and regularly back up. 
If you can, enable version control on your document, and keep a trace of editing updates. Rather than sharing a draft paper as a PDF with your supervisors, send them a share of the Overleaf document. Initially, in the first stages of your research, ask supervisors to clearly mark up typos, bad grammar and mistakes. At later stages, it is probably acceptable to allow them to update without highlighting. When you read a great paper, mark it up with a highlighter (either on paper or highlighted on a PDF) … show your supervisor how closely you read a paper, and discuss these points with them. Keep a paper log book of your work, and write down your thoughts and ideas as you go along. Remember to put the date on it and get it signed by your supervisor on a regular basis — especially when you have a breakthrough. Think about the best way to present results, and try to aggregate them together rather than having long sequences of diagrams and tables. A single table is often the best way to bring all your results together. A graph has an x-axis and a y-axis — make sure you label these correctly. Microsoft Excel is often poor at drawing graphs. Try to present at a quality which could be publishable. Figure labels go below the diagram, and table labels go above the table. Avoid summaries at arbitrary points in a thesis. These can just annoy the reader, as they have just read all the work. Leave your summary to the end of a chapter. Be focused in your conclusions, and recap the main things you are taking forward and what you are rejecting. A good conclusion can be fitted into less than one page. The tail-end of the thesis is the conclusion and the future work. Bring back the problem statement and your objectives, and tell the reader how the thesis addresses these, and the main significance of your work. Remember to conclude all parts of your thesis, too. And be humble, and show that you have not solved everything in the area and where your work can be improved. 
The future work is your chance to shine, and where others can follow your work. Get it right, and help others. Get enterprise training early on, and understand how your work could be used, by whom, and why. If you can, do some maths. Lay out your maths properly, and, for example, don’t use “*” for multiplication. Use a proper LaTeX equation layout for these. Remember to use in-text equation markup too, and name and describe every variable used. Store the data from all your experiments, as it may be asked for in the future. Write blogs and do some public engagement, and modify your writing and presentation style to suit. If you are an ECR, don’t lose the skills you picked up in your PhD … still read papers and have your own vision and mission statements. Go deep in your research rather than surface learning. Get some textbooks, and read about the background theory.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27788379
Bill Buchanan - 100 Interesting Things to Learn About Cryptography
08/17/2023
Here are my 100 interesting things to learn about cryptography: For a 128-bit encryption key, there are 340 billion billion billion billion possible keys. [Calc: 2**128/(1e9**4)] For a 256-bit encryption key, there are 115,792 billion billion billion billion billion billion billion billion possible keys. [Calc: 2**256/(1e9**8)] Cracking a 128-bit key with brute force using a cracker running at 1 Teracrack/second will take — on average — 5 million million million years. Tera is 1,000 billion. [Calc: 2**128/1e12/2/60/60/24/365/(1e6**3)] For a 256-bit key, this is 1,835 million million million million million million million million million years. For the brute-force cracking of a 35-bit symmetric key (such as AES), you only need to pay for the energy to boil a teaspoon of water. For a 50-bit key, you just need to have enough money to pay to boil the water for a shower. For a 90-bit symmetric key, you would need the energy to boil a sea, and for a 105-bit symmetric key, you need the energy to boil an ocean. For a 128-bit key, there just isn’t enough water on the planet to boil for that. With symmetric key encryption, anything below 72 bits is relatively inexpensive to crack with brute force. One of the first symmetric key encryption methods was the LUCIFER cipher, created by Horst Feistel at IBM. It was further developed into the DES encryption method. Many, at the time of the adoption of DES, felt that its 56-bit key was too small to be secure and that the NSA had a role in limiting the key size. With a block cipher, we only have to deal with a fixed size of blocks. DES and 3DES use a 64-bit (eight-byte) block size, and AES uses a 128-bit block size (16 bytes). With symmetric key methods, we either have block ciphers, such as DES, AES CBC and AES ECB, or stream ciphers, such as ChaCha20 and RC4. In order to enhance security, AES has a number of rounds where parts of the key are applied. 
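The key-space and brute-force figures above follow from simple arithmetic, which can be checked directly:

```python
# Number of possible keys for 128-bit and 256-bit keys
print(2**128)  # about 3.4e38, i.e. 340 billion billion billion billion
print(2**256)  # about 1.16e77

# Average brute-force time at 1 Teracrack/second (10^12 tries/sec),
# searching half the key space on average
seconds = 2**128 / 1e12 / 2
years = seconds / (60 * 60 * 24 * 365)
print(years)   # about 5.4e18, i.e. roughly 5 million million million years
```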
With 128-bit AES, we have 10 rounds, and 14 rounds for 256-bit AES. In AES, we use an S-box to scramble the bytes, and which is applied for each round. When decrypting, we have the inverse of the S-box used in the encrypting process. A salt/nonce or Initialisation Vector (IV) is used with an encryption key in order to change the ciphertext for the same given input. Stream ciphers are generally much faster than block ciphers, and can generally be processed in parallel. With the Diffie-Hellman method, Bob creates x and shares g^x (mod p), and Alice creates y, and shares g^y (mod p). The shared key is g^{xy} (mod p). Ralph Merkle — the boy genius — submitted a patent on 5 Sept 1979 which outlined the Merkle hash. This is used to create a block hash. Ralph Merkle’s PhD supervisor was Martin Hellman (famous as the co-creator of the Diffie-Hellman method). Adi Shamir defined a secret sharing method, which uses a mathematical equation with the sharing of (x,y) points, and where a constant value in the equation is the secret. With Shamir Secret Shares (SSS), for a quadratic equation of y=x²+5x+6, the secret is 6. We can share three points at x=1, x=2 and x=3, which gives y=12, y=20, and y=30, respectively. With the points of (1,12), (2,20), and (3,30), we can recover the value of 6. Adi Shamir broke the Merkle-Hellman knapsack method at a live event at a rump session of a conference. With secret shares, with the highest polynomial power of n, we need n+1 points to come together to regenerate the secret. For example, y=2x+5 needs two points to come together, while y=x²+15x+4 needs three points. The first usable public key method was RSA — created by Rivest, Shamir and Adleman. It was first published in 1977 and defined in the RSA patent entitled “Cryptographic Communications System and Method”. In public key encryption, we use the public key to encrypt data and the private key to decrypt it. 
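The Shamir example above can be checked in code: Lagrange interpolation at x=0 recovers the constant term, which is the secret. This is a sketch over the rationals; real schemes do the same arithmetic modulo a prime:

```python
from fractions import Fraction

# Points on y = x^2 + 5x + 6 (the secret is the constant term, 6)
points = [(1, 12), (2, 20), (3, 30)]

def recover_secret(points):
    """Lagrange interpolation evaluated at x = 0 recovers the constant term.
    Sketch over the rationals; real schemes work modulo a prime."""
    secret = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(0 - xj, xi - xj)
        secret += term
    return secret

print(recover_secret(points))  # 6
```

With only two of the three points, the quadratic is underdetermined, which is why n+1 shares are needed for a degree-n polynomial.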
In digital signing, we use the private key to sign a hash and create a digital signature, and then the associated public key to verify the signature. Len Adleman — the “A” in the RSA method — thought that the RSA paper would be one of the least significant papers he would ever publish. The RSA method came to Ron Rivest while he slept on a couch. Martin Gardner published information on the RSA method in his Scientific American article. Initially, there were 4,000 requests for the paper (which rose to 7,000), and it took until December 1977 for them to be posted. The security of RSA is based on the multiplication of two random prime numbers (p and q) to give a public modulus (N). The difficulty of RSA is the difficulty of factorizing this modulus. Once factorized, it is easy to decrypt a ciphertext that has been encrypted using the related modulus. In RSA, we have a public key of (e,N) and a private key of (d,N). e is the public exponent and d is the private exponent. The public exponent is normally set at 65,537. The binary value of 65,537 is 10000000000000001 — this form makes exponentiation efficient in RSA. In RSA, the ciphertext is computed from a message of M as C=M^e (mod N), and is decrypted with M=C^d (mod N). We compute the private exponent (d) from the inverse of the public exponent (e) modulo PHI, and where PHI is (p-1)*(q-1). If we can determine p and q, we can compute PHI. Anything below a 768-bit public modulus is relatively inexpensive to crack for RSA. To crack 2K RSA at the current time, we would need the energy to boil every ocean on the planet. RSA requires padding for security. A popular method has been PKCS#1 v1.5 — but this is not provably secure and is susceptible to Bleichenbacher’s attack. An improved method is Optimal Asymmetric Encryption Padding (OAEP), defined by Bellare and Rogaway and standardized in PKCS#1 v2. 
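The RSA equations above can be traced with a toy example (tiny, insecure primes purely for illustration; real keys use 2K-bit or larger moduli and padding):

```python
# Toy RSA with tiny primes (insecure, purely for illustration)
p, q = 61, 53
N = p * q                  # public modulus: 3233
PHI = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (65,537 in practice)
d = pow(e, -1, PHI)        # private exponent: inverse of e modulo PHI

M = 65                     # message (must be less than N)
C = pow(M, e, N)           # encrypt: C = M^e (mod N)
print(C)                   # 2790
print(pow(C, d, N))        # decrypt: M = C^d (mod N) -> 65
```

Knowing p and q makes computing PHI, and hence d, easy, which is exactly why the security rests on the difficulty of factorizing N.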
The main entity contained in a digital certificate is the public key of a named entity. This is either an RSA or an Elliptic Curve key. A digital certificate is signed with the private key of a trusted entity — Trent. The public key of Trent is then used to prove the integrity and trust of the associated public key. For an elliptic curve of y²=x³+ax+b (mod p), not every (x,y) point is possible. The total number of points is defined as the order (n). ECC (Elliptic Curve Cryptography) was invented independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms did not take off until 2004. In ECC, the public key is a point on the elliptic curve. For secp256k1, we have a 256-bit private key and a 512-bit (x,y) point for the public key. A “04” in the public key is an uncompressed public key, and “02” and “03” are compressed versions with only the x-co-ordinate and whether the y coordinate is odd or even. Satoshi selected the secp256k1 curve for Bitcoin, and which gives the equivalent of 128-bit security. The secp256k1 curve uses the mapping of y²=x³ + 7 (mod p), and is known as a Short Weierstrass (“Vier-strass”) curve. The prime number used with secp256k1 is 2²⁵⁶-2³²-2⁹-2⁸-2⁷-2⁶-2⁴-1. An uncompressed secp256k1 public key has 512 bits and is an (x,y) point on the curve. The point starts with a “04”. A compressed secp256k1 public key only stores the x-co-ordinate value and whether the y coordinate is odd or even. It starts with a “02” if the y-co-ordinate is even; otherwise, it starts with a “03”. In computing the public key in ECC of a.G, we use the Montgomery multiplication method, which was created by Peter Montgomery in 1985, in a paper entitled “Modular Multiplication without Trial Division.” Elliptic Curve methods use two basic operations: point addition (P+Q) and point doubling (2.P). These can be combined to provide the scalar operation of a.G. 
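The claim that not every (x,y) point lies on the curve, and the “02”/“03” compressed-key prefix rule, can be checked on a toy version of the secp256k1 curve over a tiny prime field (p=97 here purely for illustration):

```python
# Enumerate the points of y^2 = x^3 + 7 over GF(97) -- a toy stand-in
# for secp256k1, whose prime is 2^256 - 2^32 - 977
p = 97
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + 7)) % p == 0]
print(len(points))  # far fewer than the 97*97 possible (x, y) pairs

# Compressed-key prefix: "02" if y is even, "03" if y is odd
x, y = points[0]
prefix = "02" if y % 2 == 0 else "03"
print(prefix, hex(x))
```

The number of curve points (plus the point at infinity) is the order n; for the real secp256k1 curve this is a 256-bit value.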
In 1999, Don Johnson and Alfred Menezes published a classic paper on “The Elliptic Curve Digital Signature Algorithm (ECDSA)”. It was based on the DSA (Digital Signature Algorithm) — created by David W. Kravitz in a patent which was assigned to the US government. ECDSA is a digital signature method and requires a random nonce value (k), and which should never be reused or repeated. ECDSA is an elliptic curve conversion of the DSA signature method. Digital signatures are defined in FIPS (Federal Information Processing Standard) 186–5. NIST approved the Rijndael method (led by Joan Daemen and Vincent Rijmen) for the Advanced Encryption Standard (AES). Other contenders included Serpent (led by Ross Anderson), Twofish (led by Bruce Schneier), MARS (led by IBM), and RC6 (led by Ron Rivest). ChaCha20 is a stream cipher that is based on Salsa20 and developed by Daniel J. Bernstein. MD5 has a 128-bit hash, SHA-1 has 160 bits and SHA-256 has 256 bits. It is relatively easy to create a hash collision with MD5. Google showed that it was possible to create a hash collision for a document with SHA-1. It is highly unlikely to get a hash collision for SHA-256. In 2015, NIST defined SHA-3 as a standard, and which was built on the Keccak hashing family — and which used a different method to SHA-2. The Keccak hash family uses a sponge function and was created by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche and standardized by NIST in August 2015 as SHA-3. Hash functions such as MD5, SHA-1 and SHA-256 have a fixed hash length, whereas an eXtendable-Output Function (XOF) produces a bit string that can be of any length. Examples are SHAKE128, SHAKE256, BLAKE2XB and BLAKE2XS. BLAKE3 is the fastest cryptographically secure hashing method and was created by Jack O’Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O’Hearn. Hashing methods can be slowed down with a number of rounds. These slower hashing methods include Bcrypt, PBKDF2 and scrypt. 
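The fixed hash lengths, and the contrast with an XOF whose output length the caller chooses, can be seen directly with Python’s hashlib:

```python
import hashlib

msg = b"hello"
# Fixed-length digests: MD5 (128 bits), SHA-1 (160 bits), SHA-256 (256 bits)
assert len(hashlib.md5(msg).digest()) * 8 == 128
assert len(hashlib.sha1(msg).digest()) * 8 == 160
assert len(hashlib.sha256(msg).digest()) * 8 == 256
# SHA-3 (the Keccak sponge construction) at 256 bits
assert len(hashlib.sha3_256(msg).digest()) * 8 == 256
# SHAKE128 is an XOF: we ask for however many bytes we want
assert len(hashlib.shake_128(msg).digest(64)) == 64
```

The same `shake_128` call with a different length argument yields a prefix-consistent output of that new length, which is what makes XOFs useful for deriving arbitrary-length material.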
Argon2 uses methods to try and break GPU cracking, such as using a given amount of memory and defining the CPU utilization. To speed up the operation of the SHA-3 hash, the team reduced the security of the method and reduced the number of rounds. The result is the KangarooTwelve (K12) hashing method. The number of rounds was reduced from 24 to 12 (with a security level of around 128 bits). Integrated Encryption Scheme (IES) is a hybrid encryption scheme which allows Alice to get Bob’s public key and then generate a shared symmetric encryption key from it, and where Bob uses his private key to recover this symmetric key. With ECIES, we use elliptic curve methods for the public key part. A MAC (Message Authentication Code) uses a symmetric key to sign a hash, and where Bob and Alice share the same secret key. The most popular method is HMAC (hash-based message authentication code). The AES block cipher can be converted into a stream cipher using modes such as GCM (Galois Counter Mode) and CCM (counter with cipher block chaining message authentication code; counter with CBC-MAC). A MAC is added to a symmetric key method in order to stop the ciphertext from being attacked by flipping bits. CTR mode on its own does not have a MAC, and is thus susceptible to this attack; GCM adds a GMAC, and CCM adds a CBC-MAC. With symmetric key encryption, we must remove the encryption keys in the reverse order they were applied. Commutative encryption overcomes this by allowing the keys to be removed in any order. It is estimated that Bitcoin miners consume around 17.05 GW of electrical power, or around 149.46 TWh per year. A KDF (Key Derivation Function) is used to convert a passphrase or secret into an encryption key. The most popular methods are HKDF, PBKDF2 and Bcrypt. RSA, ECC and Discrete Log methods will all be cracked by quantum computers using Shor’s algorithm. Lattice methods represent bit values as polynomial values, such as 1001 being x³+1 as a polynomial. 
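The KDF idea can be sketched with PBKDF2 from the standard library (the passphrase and iteration count here are illustrative; a deliberately high iteration count is what makes brute-force guessing expensive):

```python
import hashlib
import os

# Derive a 256-bit encryption key from a passphrase with PBKDF2-HMAC-SHA256
salt = os.urandom(16)  # a fresh random salt per derivation
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                          salt, 600_000)
assert len(key) == 32  # defaults to the hash's digest size (256 bits)
```

The same passphrase with a different salt yields an unrelated key, which is why the salt is stored alongside the derived value rather than kept secret.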
Taher Elgamal — the sole inventor of the ElGamal encryption method — and Paul Kocher were the creators of SSL, and developed it for the Netscape browser. David Chaum is considered a founder of electronic payments and, in 1983, created ECASH, along with publishing a paper on “Blind signatures for untraceable payments”. Satoshi Nakamoto worked with Hal Finney on the first versions of Bitcoin, and which were created for a Microsoft Windows environment. Blockchains can either be permissioned (requiring rights to access the blockchain) or permissionless (open to anyone to use). Bitcoin and Ethereum are the two most popular permissionless blockchains, and Hyperledger is the most popular permissioned ledger. In 1992, Eric Hughes, Timothy May, and John Gilmore set up the cypherpunk movement and defined, “We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.” In Bitcoin and Ethereum, a private key (x) is converted to a public key with x.G, and where G is the base point on the secp256k1 curve. Ethereum was first conceived in 2013 by Vitalik Buterin, Gavin Wood, Charles Hoskinson, Anthony Di Iorio and Joseph Lubin. It introduced smaller blocks, improved proof of work, and smart contracts. NI-ZKPs involve a prover (Peggy), a verifier (Victor) and a witness (Wendy) and were first defined by Manuel Blum, Paul Feldman, and Silvio Micali in their paper entitled “Non-interactive zero-knowledge and its applications”. Popular ZKP methods include ZK-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge) and ZK-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge). Bitcoin and Ethereum are pseudo-anonymised, and where the sender and recipient of a transaction, and its value, can be traced. Privacy coins enable anonymous transactions. These include Zcash and Monero. 
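The x.G conversion from private key to public key can be sketched with double-and-add point multiplication. This is a toy model on y²=x³+7 over GF(97), not the real secp256k1 arithmetic, and the private key of 5 is arbitrary:

```python
# Double-and-add scalar multiplication on a toy short Weierstrass curve
p = 97  # toy field prime (secp256k1 uses a 256-bit prime)

def add(P, Q):
    # Affine point addition, with None as the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:  # point doubling: 2.P
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:       # point addition: P + Q
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Pick any curve point to act as the base point G
G = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - x ** 3 - 7) % p == 0)
pub = scalar_mult(5, G)  # private key 5 -> public key 5.G
```

The group law makes 2.G + 3.G equal 5.G, which is a handy self-check for an implementation like this.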
In 1992, David Chaum and Torben Pryds Pedersen published “Wallet databases with observers,” and outlined a method of shielding the details of a monetary transaction. In 1979, Adi Shamir (the “S” in RSA) published a paper on “How to share a secret” in the Communications of the ACM. This supported the splitting of a secret into a number of shares (n) and where a threshold value (t) could be defined for the minimum number of shares that need to be brought back together to reveal the secret. These are known as Shamir Secret Shares (SSS). In 1991, Torben P. Pedersen published a paper entitled “Non-interactive and information-theoretic secure verifiable secret sharing” — and which is now known as the Pedersen Commitment. This is where we produce our commitment and then show the message that matches the commitment. Distributed Key Generation (DKG) methods allow a private key to be shared by a number of trusted nodes. These nodes can then sign for a part of the ECDSA signature by producing a partial signature with these shares of the key. Not all blockchains use ECDSA. The IOTA blockchain uses the EdDSA signature, and which uses Curve25519. This is a more lightweight signature version and has better support for signature aggregation. It uses Twisted Edwards Curves. The core signing method used in EdDSA is based on the Schnorr signature scheme, which was created by Claus Schnorr in 1989. This was patented as a “Method for identifying subscribers and for generating and verifying electronic signatures in a data exchange system”. The patent ran out in 2008. Curve25519 uses the prime number of 2²⁵⁵-19 and was created by Daniel J. Bernstein. Peter Shor defined that elliptic curve methods can be broken with quantum computers. To overcome the cracking of the ECDSA signature from quantum computers, NIST are standardising a number of methods. At present, this focuses on CRYSTALS-Dilithium, and which is a lattice cryptography method. 
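The Schnorr scheme that EdDSA builds on can be sketched in its original multiplicative-group form. The parameters here (p=23, q=11, g=4, private key 7) are tiny, purely for illustration:

```python
import hashlib
import secrets

# Toy Schnorr signature: g has order q modulo p, private key x, public y
p, q, g = 23, 11, 4
x = 7
y = pow(g, x, p)

def H(r, m):
    # Hash the commitment and message down to an exponent mod q
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(),
                          "big") % q

def sign(m):
    k = secrets.randbelow(q - 1) + 1  # fresh nonce -- must never repeat
    r = pow(g, k, p)                  # commitment
    s = (k + x * H(r, m)) % q         # response
    return r, s

def verify(m, r, s):
    # g^s = g^(k + x.e) = r * y^e (mod p)
    return pow(g, s, p) == (r * pow(y, H(r, m), p)) % p

r, s = sign("hello")
assert verify("hello", r, s)
```

Reusing the nonce k across two messages leaks the private key, which is the same hazard the ECDSA snippets above warn about.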
Bulletproofs were created in 2017 by Stanford’s Applied Cryptography Group (ACG). They define a zero-knowledge proof as where a value can be checked to see it lies within a given range. The name “bulletproofs” comes from their being short, like a bullet, and with bulletproof security assumptions. Homomorphic encryption methods allow for the processing of encrypted values using arithmetic operations. A public key is used to encrypt the data, and which can then be processed using an arithmetic circuit on the encrypted data. The owner of the associated private key can then decrypt the result. Some traditional public key methods enable partial homomorphic encryption. RSA and ElGamal allow for multiplication and division, whilst Paillier allows for homomorphic addition and subtraction. Full homomorphic encryption (FHE) supports all of the arithmetic operations and includes Fan-Vercauteren (FV) and BFV (Brakerski/Fan-Vercauteren) for integer operations and HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) for floating point operations. Most of the full homomorphic encryption methods use lattice cryptography. Some blockchain applications use Barreto-Lynn-Scott (BLS) curves which are pairing-friendly. They can be used to implement bilinear groups, and which are a triplet of groups (G1, G2 and GT), so that we can implement a function e() such that e(g1^x,g2^y)=gT^{xy}. Pairing-based cryptography is used in ZKPs. The main BLS curves used are BLS12–381, BLS12–446, BLS12–455, BLS12–638 and BLS24–477. An accumulator can be used for zero-knowledge proof of knowledge, such as using a BLS curve to add and remove proofs of knowledge. Metamask is one of the most widely used blockchain wallets and can integrate into many blockchains. Most wallets generate the seed from the operating system, and where the browser can use the Crypto.getRandomValues function, which is compatible with most browsers. 
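The partial homomorphism of RSA is easy to demonstrate: the product of two ciphertexts decrypts to the product of the plaintexts. A toy sketch with the same tiny textbook primes (real use would need much larger keys and careful padding choices):

```python
# RSA's multiplicative homomorphism: Enc(a) * Enc(b) decrypts to a * b
p, q, e = 61, 53, 17
N, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

def enc(m):
    return pow(m, e, N)

def dec(c):
    return pow(c, d, N)

a, b = 7, 9
product_cipher = enc(a) * enc(b) % N  # (a^e)(b^e) = (ab)^e (mod N)
assert dec(product_cipher) == (a * b) % N
```

Paillier gives the analogous property for addition, which is why it appears in voting and privacy-preserving aggregation schemes.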
With a Verifiable Delay Function (VDF), we can prove that a given amount of work has been done by a prover (Peggy). A verifier (Victor) can then send the prover a proof value and compute a result which verifies the work has been done, without the verifier needing to redo the work. A Physical Unclonable Function (PUF) is a one-way function which creates a unique signature pattern based on the inherent delays within the wires and transistors. This can be used to link a device to an NFT.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27779289
Talking with Tech Leaders: A Chat with Michael Phair (Be-IT)
08/16/2023
Talking with Tech Leaders: A Chat with Michael Phair (Be-IT)
Interview here:
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27767439
Bill Buchanan - A Vision for the NHS: A Citizen Wallet
08/15/2023
Bill Buchanan - A Vision for the NHS: A Citizen Wallet
Your organisation needs a vision. Without it, you will never be great. You will never advance. You will keep doing the same old things and without any real purpose. A vision gives you a purpose and a focus. But, it needs to have a plan which takes you there. But, without it, how can you ever plan? For any great organisation, you start with a vision. So, what about a vision for the NHS? I appreciate that I am only a technologist, but I am also a citizen, and I care about the health and well-being of my fellow citizens. I also don’t like bureaucracy and inefficiencies — and I strive in my working life to overcome these. So, our work has generally focused on improving the citizen’s viewpoint of health care. And, so, I am honoured to present at Digital Scotland 2023 this year, and on a topic which has been our passion for over a decade — a citizen-focused health care system: I have attended conferences in Scotland and which talk about “citizen-focused health care”, and the audience all go away inspired and ready to build new digital worlds for citizens. But nothing happens, and then we repeat the next year again. Well, this year, I will show that a vision can be created as a reality. With digital wallets, the technology is all in place, and there are no great barriers to overcome, any more. During the COVID-19 time, there was some hope for digital advancement, and where we saw the use of digital passports — but we have failed to build on these, and have ended up with little in the way of digital engagement between the NHS and the citizen. So here’s the problem and my vision — and it’s quite simple. The Problem I have interacted with the NHS for several decades — luckily, I have never had any medical ailments, but have observed it in relation to others. 
I have seen some truly shocking practices in dealing with patient records — including someone in my family having “Do not resuscitate” written on their records without any discussion with the family, and where a physical filing cabinet of patient records had to be moved by taxi from one hospital in Edinburgh to another one. Overall, there is often great resistance to change and to the adoption of digital methods. The Connecting for Health programme — which cost over £15 billion — had to be eventually cancelled, as it delivered nothing. Why? Well, one reason is, “Won’t this replace my existing job of writing down the details?”, “Yes, but you can go and do something even better with your time”, “But, this is what I was trained for. And I don’t trust computers, anyway!” And, so, recently, I went to register for a GP and was handed a piece of paper and told to find a pen and write down all my details. In virtually every interaction with the NHS, I have had to do this, and perhaps, one day, I will have all my medical details stored on a digital wallet on my phone, and where the GP just scans them in. Once I filled it in, it then went into a black hole — and where I hoped that a human would eventually make sense of my scribbles. To date, I have yet to receive any confirmation that I have been registered, and I have no on-line place to check my details. The NHS can hardly get to first base in creating a proper online world for my data. It fleetingly sends me the odd email or SMS message, but it still sits behind a high wall. Overall, in places, it feels like there are still parts of our lives that are stuck in the 20th Century. The Vision Let me dream now. One day, I will register for a new GP. I will walk in, and the receptionist will ask me to register. I will press a few buttons and generate a QR code. 
They will scan this in, and an instant message will appear to say that I am now registered and say that all my details have been registered, “Ah, Nice to meet you, Bill. I see it is your birthday today. Please can you check your details are okay and consent to its storage?”. “All looks good”. “How would you like us to contact you, by email or SMS?”, “Email. Please. Here is the QR code for my email address, and you can scan it”. When I go to see the GP, they ask me for my weight and height, and, again, I go to my wallet and generate a QR code and where the GP scans it in and says, “That’s great. All is okay. But your BMI is a little high, and a little up from the last time we met. I will store this in your record, and we can keep track of it. I will email you some recommendations for your diet”. How long will it take for this to happen? Well, it’s actually not that difficult. We are part of an EU project which has developed the GLASS wallet and which puts the control back in the citizen’s hands: So, why not come and see it live on 12 Sept in London []: And in November []: Conclusions We have a dream for an improved healthcare system — and we have had this dream for over a decade. The technology is all in place — we now need to use it and put the power back into the hands of those who really matter — the citizen!
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27757914
Bill Buchanan - Let’s Talk About Spreadsheets
08/15/2023
Bill Buchanan - Let’s Talk About Spreadsheets
I remember attending a talk many years ago, and the presenter said, “I’ve got this amazing tool called Lotus 123”, and he gave a practical demo of doing some calculations. People in the audience were stunned by the simplicity of its operation. It was the birth of the thing that drives many businesses … spreadsheets. They are just so simple to use, and we all love them. And so, in the PSNI (Police Service of Northern Ireland) data breach, it is a simple Excel spreadsheet that is being pin-pointed as the carrier of highly-sensitive information. Overall, in the breach, there were four major failings: A lack of training and awareness from those handling the FoI request. A lack of checking and sign-off within the process. Documents should be marked with the security classification, and access rights defined properly to highly confidential documents. The use of spreadsheets to store sensitive data. I hope that the first two are quite obvious in mitigating … send staff on cybersecurity courses, and improve your sign-off procedures. Now, let’s turn to the mighty Microsoft Excel. So, what’s wrong with spreadsheets? Well, they are NOT DATABASES and should not be used as a database. I’ve done quite a few code reviews and am always shocked by the number of back-end databases that use Microsoft Excel. Basically, Excel is a basic computing engine that is optimized for small problems and not for those that a database can cope with. But, the main weakness is that they have virtually no inbuilt security and should not be used for sensitive data. Unfortunately, Microsoft has never really properly integrated security into Excel, and even encrypted documents are flawed in their operation. The cyber-aware world has moved on from spreadsheets, and in many organisations, we see SaaS (Software as a Service), which restricts access to data. Only those with the rights to access key elements of the data can get access to it. HR systems, too, are carefully guarded in cloud-based systems. 
In fact, moving your data into the public cloud really gives you an excellent viewpoint on how to protect sensitive data. I’ve seen some excellent data protection teams operating in banks, and much of their work is driven by automated software. I appreciate that data sometimes needs to be exported into a spreadsheet, but if it does, it should be encrypted in that form, and not rely on the operating system to do this. Perhaps law enforcement — in places — is a decade behind the finance industry in setting up SOCs (Security Operations Centres), and where a well-run security infrastructure would be continually scanning for sensitive documents. Data protection procedures have been implemented in many finance companies for years, and where scanners pick up documents that are stored in places they shouldn’t be. Network scanners, too, can pin-point sensitive documents within the infrastructure, and also when sent outside the network. Any document that leaves an organisation such as the police should, at least, be triaged, no matter if it is for email or Web. The detection of telephone numbers, personal names and addresses in a document is fairly trivial with the usage of regular expressions. An alert should have gone up with the loading of a file with so many personal details. Conclusions Policing needs to learn from this data breach. They need to increase awareness and implement training, along with better sign-off procedures. But, basically, they need to catch up with the rest of the world and implement proper safeguards on sensitive information. The days of marking a document as “confidential” are gone — we need better data handling, and spreadsheets are typically not part of this for highly sensitive information. I believe that the police and other government agencies can learn a great deal from the finance industry on cybersecurity practices. They are the most attacked sector, but have one of the lowest rates of data breaches.
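The regular-expression triage mentioned above can be sketched in a few lines. The patterns and the sample text here are simplified, made-up examples rather than production-grade detectors:

```python
import re

# Sketch of scanning an outgoing document for personal data before release
text = "Contact J. Smith on 0131 496 0000 or j.smith@example.com"

# Simplified UK-style phone pattern and a loose email pattern
phones = re.findall(r"\b\d{4}\s\d{3}\s\d{4}\b", text)
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

if phones or emails:
    print(f"ALERT: {len(phones)} phone number(s) and "
          f"{len(emails)} email address(es) found")
```

A real scanner would use a library of patterns (national insurance numbers, addresses, names against a staff list) and raise an alert well before a file with thousands of personal records ever left the network.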
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27757362
Bill Buchanan - A Bluffer’s Guide to Blockchain: 100 Knowledge Snippets
08/13/2023
Bill Buchanan - A Bluffer’s Guide to Blockchain: 100 Knowledge Snippets
So, here’s my Top 100 snippets of knowledge for blockchain: Blockchains use public key methods to integrate digital trust. Bob signs for a transaction with his private key, and Alice proves this with Bob's public key. The first usable public key method was RSA — created by Rivest, Shamir and Adleman. It was first published in 1977 and defined in the RSA patent entitled “Cryptographic Communications System and Method”. Blockchains can either be permissioned (requiring rights to access the blockchain) or permissionless (open to anyone to use). Bitcoin and Ethereum are the two most popular permissionless blockchains, and Hyperledger is the most popular permissioned ledger. Ralph Merkle — the boy genius — submitted a patent on 5 Sept 1979 and which outlined the Merkle hash. This is used to create a block hash. Ralph Merkle’s PhD supervisor was Martin Hellman (famous as the co-creator of the Diffie-Hellman method). David Chaum is considered a founder of electronic payments and, in 1983, created ECASH, along with publishing a paper on “Blind signatures for untraceable payments”. Miners gather transactions on a regular basis, and these are added to a block and where each block has a Merkle hash. The first block on a blockchain does not have any previous blocks — and is named the genesis block. Blocks are bound in a chain, and where the previous, current and next block hashes are bound into the block. This makes the transactions in the block immutable. Satoshi Nakamoto worked with Hal Finney on the first versions of Bitcoin, and which were created for a Microsoft Windows environment. Craig Steven Wright has claimed that he is Satoshi Nakamoto, but this claim has never been verified. Most blockchains use elliptic curve cryptography — a method which was created independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms did not take off until 2004. 
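The Merkle hash that binds transactions into a block can be sketched as a simple pairwise hash tree. This is a simplified illustration with made-up transactions (Bitcoin actually applies SHA-256 twice at each step):

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each transaction, then hash pairs upwards until one root remains
    level = [sha256(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash if odd
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

txs = [b"Alice->Bob: 5", b"Bob->Carol: 2", b"Carol->Dave: 1"]
root = merkle_root(txs)
print(root)
```

Changing a single byte of any transaction changes the root, which is what makes the transactions in a block immutable once the block hash is chained.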
Satoshi selected the secp256k1 curve for Bitcoin, and which gives the equivalent of 128-bit security. The secp256k1 curve uses the mapping of y²=x³ + 7 (mod p), and is known as a Short Weierstrass (“Vier-strass”) curve. The prime number used with secp256k1 is 2²⁵⁶−2³²−2⁹−2⁸−2⁷−2⁶−2⁴−1. Satoshi published a 9-page white paper entitled “Bitcoin: A Peer-to-Peer Electronic Cash System” on 31 Oct 2008. In 1997, Adam Back introduced the concept of Proof of Work with Hashcash in a paper entitled, “Hashcash — a denial of service countermeasure.” This work was used by Satoshi in his whitepaper. Satoshi focused on a decentralized system and a consensus model, and addressed areas of double-spend, Sybil attacks and Eve-in-the-middle. The Sybil attack is where an adversary can take over the general consensus of a network — and leads to a 51% attack, and where the adversary manages to control 51% or more of the consensus infrastructure. Satoshi used UK spelling in his correspondence, such as using the spelling of “honour”. The first Bitcoin block was minted on 3 Jan 2009 and contained a message of “Chancellor on brink of second bailout for banks” (the headline from The Times, as published in London on that day). On 12 Jan 2009, Satoshi sent the first Bitcoin transaction of 10 BTC to Hal Finney []. A new block is created every 7–10 minutes on Bitcoin. In Aug 2023, the total Bitcoin blockchain size is 502 GB. As of Aug 2023, the top three cryptocurrencies are Bitcoin, Ether, and Tether. Bitcoin has a capitalization of $512 billion, Ether with $222 billion, and Tether at $83 billion. The total cryptocurrency capitalisation is $1.17 trillion. The original block size was 1MB for Bitcoin, but it was recently upgraded to support a 1.5MB block — and has around 3,000 transactions. Currently, the block sizes are more than 1.7MB. Bitcoin uses a gossip protocol to propagate transactions. A Bitcoin wallet is created from a random seed value. 
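The Hashcash-style proof of work can be sketched as a search for a nonce that gives a hash with a run of leading zeros (the difficulty of four hex zeros here is a toy setting; Bitcoin’s real target is vastly harder):

```python
import hashlib

def mine(block, difficulty=4):
    # Search for a nonce so that SHA-256(block || nonce) starts with
    # `difficulty` leading zero hex digits
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block-data", difficulty=4)
print(nonce, digest)
```

Finding the nonce takes many thousands of hash attempts on average, but anyone can verify the result with a single hash, which is the asymmetry that proof of work relies on.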
This seed value is then used to create the 256-bit secp256k1 private key. A wallet seed can be converted into a mnemonic format using BIP39, and which uses 12 common words. This is a deterministic key, and which allows the regeneration of the original key in the correct form. BIP39 allows for the conversion of the key to a number of languages, including English, French and Italian. A private key in a wallet is stored in a WIF (Wallet Import Format), and which is a Base58 version of the 256-bit private key. The main source code for the Bitcoin blockchain is held at , and is known as Bitcoin Core. This is used to create nodes, store coins, and transact with other nodes on the Bitcoin network. A 256-bit private key has 115,792 billion billion billion billion billion billion billion billion different keys. A public Bitcoin ID uses Base58 and has a limited character set of ‘123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz’, where we delete ‘0’ (zero), ‘O’ (capital ‘o’), ‘I’ (capital ‘i’) and ‘l’ (lowercase ‘L’) — as these can be confused with other characters. In Bitcoin and Ethereum, a private key (x) is converted to a public key with x.G, and where G is the base point on the secp256k1 curve. An uncompressed secp256k1 public key has 512 bits and is an (x,y) point on the curve. The point starts with a “04”. A compressed secp256k1 public key only stores the x-co-ordinate value and whether the y coordinate is odd or even. It starts with a “02” if the y-co-ordinate is even, otherwise it starts with a “03”. In 1992, Eric Hughes, Timothy May, and John Gilmore set up the cypherpunk movement and defined, “We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.” In Ethereum, the public key is used as the identity of a user (a.G), and is defined as a hexadecimal value. 
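The Base58 encoding used for Bitcoin IDs can be sketched in a few lines; the alphabet is the one above, with 0, O, I and l removed:

```python
# Base58 encoding as used for Bitcoin IDs (a minimal sketch)
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Each leading zero byte is encoded as the first alphabet character
    leading_zeros = len(data) - len(data.lstrip(b"\x00"))
    return "1" * leading_zeros + out

print(base58_encode(b"hello"))
```

A full Bitcoin address additionally uses Base58Check, which appends a checksum (a truncated double SHA-256) before encoding, so that a mistyped address is rejected rather than paid.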
In Bitcoin, the public ID is created from a SHA256 hash of the public key, and then a RIPEMD160 of this, and then converted to Base58. In computing the public key in ECC of a.G, we use the Montgomery multiplication method, which was created by Peter Montgomery in 1985, in a paper entitled “Modular Multiplication without Trial Division.” Elliptic Curve methods use two basic operations: point addition (P+Q) and point doubling (2.P). These can be combined to provide the scalar operation of a.G. In 1999, Don Johnson and Alfred Menezes published a classic paper on “The Elliptic Curve Digital Signature Algorithm (ECDSA)”. It was based on the DSA (Digital Signature Algorithm) — created by David W. Kravitz in a patent which was assigned to the US government. The core signature used in Bitcoin and Ethereum is ECDSA (Elliptic Curve Digital Signature Algorithm), and which uses a random nonce for each signature. The nonce value should never repeat or be revealed. Ethereum was first conceived in 2013 by Vitalik Buterin, Gavin Wood, Charles Hoskinson, Anthony Di Iorio and Joseph Lubin. It introduced smaller blocks, an improved proof of work, and smart contracts. Bitcoin is seen as a first-generation blockchain, and Ethereum as a second-generation. These have been followed by third-generation blockchains, such as IOTA, Cardano and Polkadot — and which have improved consensus mechanisms. Bitcoin uses a consensus mechanism which is based on Proof-of-Work, and where miners focus on finding a block hash that has a number of leading “0”s. The mining difficulty adjusts with the network hashing rate. At the current time, this is around 424 million TH/s. There are around 733,000 unique Bitcoin addresses being used. Satoshi defined a reward to miners for finding the required hash. This was initially set at 50 BTC, and halves at regular intervals (every 210,000 blocks). On 11 May 2020, it dropped from 12.5 BTC to 6.25 BTC. 
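The halving schedule is also what caps the total supply: summing a 50 BTC reward over 210,000 blocks per halving era gives a geometric series that converges to close to 21 million BTC. A quick sketch of the arithmetic:

```python
# Bitcoin's issuance schedule: the block reward halves every 210,000 blocks
reward = 50.0   # initial reward in BTC
supply = 0.0
for halving in range(34):   # after ~33 halvings the reward rounds to zero
    supply += 210_000 * reward
    reward /= 2

print(round(supply))  # -> 21000000
```

The real protocol works in integer satoshis and floors each halved reward, so the exact cap is fractionally under 21 million, but the geometric-series intuition is the same.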
Bitcoin currently consumes around 16.27 GW of electrical power to produce a consensus — equivalent to the power consumed by a small country. In creating bitcoins, Satoshi created a P2PKH (Pay to Public Key Hash) address. These addresses are used to identify the wallet to be paid and link to the public key of the owner. These addresses start with a ‘1’. In order to support the sending of bitcoins to and from multiple addresses, Bitcoin was upgraded with SegWit (defined in BIP141). A SegWit address can be wrapped in a pay-to-script-hash (P2SH) address, and these addresses start with a ‘3’. Ethereum uses miners to undertake work for changing a state and running a smart contract. They are paid in Ether for “gas”, which relates to the amount of computation conducted. This limits denial of service attacks on the network and focuses developers on creating efficient code. Ethereum supports the creation of cryptocurrency assets with ERC20 tokens — and which are FT (Fungible Tokens). For normal crypto tokens (ERC-20), there is a finite number of these, and each of them is the same. Ethereum creates NFTs (Non-Fungible Tokens) with ERC721 tokens. We mint these each time, and each is unique. Solidity is the programming language used in Ethereum, while Hyperledger can use Golang, Node.js and Java. For Ethereum, we compile Solidity code into EVM (Ethereum Virtual Machine) code. This is executed on the blockchain. Bitcoin uses the SHA-256 hash for transaction integrity. Ethereum uses the Keccak hash to define the integrity of a transaction; this differs slightly from the standardized SHA-3. The Keccak hash family uses a sponge function and was created by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, and standardized by NIST in August 2015 as SHA-3. The DAO was a decentralized autonomous organization (DAO) for the Ethereum blockchain and was launched in 2016. 
In 2016, The DAO raised $150 million through a token sale but was hacked and funds were stolen. This resulted in a forking of the blockchain: Ethereum and Ethereum Classic. Non-interactive Zero Knowledge Proofs (NI-ZKP) allow an entity to prove that they have knowledge of something — without revealing it. A typical secret is the ownership of a private key. NI-ZKPs involve a prover (Peggy), a verifier (Victor) and a witness (Wendy) and were first defined by Manuel Blum, Paul Feldman, and Silvio Micali in their paper entitled, “Non-interactive zero-knowledge and its applications”. Popular ZKP methods include ZK-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge) and ZK-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge). Bitcoin and Ethereum are pseudo-anonymised, and where the sender and recipient of a transaction, and its value, can be traced. Privacy coins enable anonymous transactions. These include Zcash and Monero. In 1992, David Chaum and Torben Pryds Pedersen published “Wallet databases with observers,” and outlined a method of shielding the details of a monetary transaction. In 1979, Adi Shamir (the “S” in RSA) published a paper on “How to share a secret” in the Communications of the ACM. This supported the splitting of a secret into a number of shares (n) and where a threshold value (t) could be defined for the minimum number of shares that need to be brought back together to reveal the secret. These are known as Shamir Secret Shares (SSS). In 1991, Torben P. Pedersen published a paper entitled “Non-interactive and information-theoretic secure verifiable secret sharing” — and which is now known as the Pedersen Commitment. This is where we produce our commitment and then show the message that matches the commitment. Distributed Key Generation (DKG) methods allow a private key to be shared by a number of trusted nodes. 
These nodes can then sign for a part of the ECDSA signature by producing a partial signature with their shares of the key. Not all blockchains use ECDSA: the IOTA blockchain uses the EdDSA signature, which uses Curve 25519. This is a more lightweight signature version and has better support for signature aggregation. It uses Twisted Edwards Curves. The core signing method used in EdDSA is based on the Schnorr signature scheme, which was created by Claus Schnorr in 1989. This was patented as a “Method for identifying subscribers and for generating and verifying electronic signatures in a data exchange system”; the patent expired in 2008. Curve 25519 uses the prime number 2^255 − 19 and was created by Daniel J. Bernstein.

Peter Shor showed that elliptic curve methods can be broken with quantum computers. To overcome the cracking of the ECDSA signature by quantum computers, NIST is standardising a number of methods. At present, this focuses on CRYSTALS-Dilithium, which is a lattice cryptography method. Bulletproofs were created in 2017 by Stanford’s Applied Cryptography Group (ACG). They define a zero-knowledge range proof, where a value can be checked to see that it lies within a given range. The name “bulletproofs” was chosen because they are short, like a bullet, and have bulletproof security assumptions.

While Bitcoin can take up to 7–10 minutes to mine a new block and create a consensus, newer blockchains, such as IOTA, can give an almost instantaneous consensus. Banks around the world are investigating CBDC (Central Bank Digital Currency), which is not a cryptocurrency but a way to quickly define a consensus on a transaction. Homomorphic encryption methods allow for the processing of encrypted values using arithmetic operations. A public key is used to encrypt the data, which can then be processed using an arithmetic circuit on the encrypted data. The owner of the associated private key can then decrypt the result.
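As a concrete illustration of the additive case, here is a toy Paillier setup, where multiplying two ciphertexts adds the underlying plaintexts. The primes are tiny and purely illustrative — real deployments use moduli of 2048 bits or more.

```python
import math
import secrets

# Toy Paillier parameters (NOT secure -- illustration only)
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    # Random blinding value r with gcd(r, n) == 1
    r = secrets.randbelow(n - 2) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the plaintexts: Dec(a*b) = 12 + 30
assert decrypt((a * b) % n2) == 42
```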
Some traditional public key methods enable partial homomorphic encryption. RSA and ElGamal allow for multiplication and division, whilst Paillier allows for homomorphic addition and subtraction. Full homomorphic encryption (FHE) supports all of the arithmetic operations and includes BFV (Brakerski/Fan-Vercauteren) for integer operations and HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) for floating-point operations. Most of the full homomorphic encryption methods use lattice cryptography.

Some blockchain applications use Barreto-Lynn-Scott (BLS) curves, which are pairing-friendly. They can be used to implement bilinear groups, which are a triplet of groups (G1, G2 and GT), so that we can implement a function e() such that e(g1^x, g2^y) = gT^(xy). Pairing-based cryptography is used in ZKPs. The main BLS curves used are BLS12–381, BLS12–446, BLS12–455, BLS12–638 and BLS24–477. An accumulator can be used for zero-knowledge proof of knowledge, such as using a BLS curve to add and remove elements with a proof of membership.

OpenZeppelin is an open-source Solidity library that supports a wide range of functions that integrate into smart contracts in Ethereum. This includes AES encryption, Base64 integration and elliptic curve operations. Metamask is one of the most widely used blockchain wallets and can integrate into many blockchains. Most wallets generate the seed from the operating system, and the browser can use the Crypto.getRandomValues function, which is compatible with most browsers. Solidity programs can be compiled with Remix at remix.ethereum.org. The main Ethereum network is Ethereum Mainnet, and we can test smart contracts on Ethereum test networks. Current networks include sepolia.etherscan.io and goerli.net. Ether for test applications can be obtained from a faucet, such as faucet.metamask.io. This normally requires some proof of work to gain the Ether — in order to protect against a denial of service on the faucet.
The private key can be revealed from two ECDSA signatures which use the same random nonce value. Polkadot is a blockchain which allows blockchains to exchange messages and perform transactions. The proof-of-work method of creating a consensus is now out of favour because of the energy that it typically uses, and many systems now focus on proof of stake (PoS). A time-lock puzzle/proof of work involves performing a computing task which has a given cost and which cannot be cheated. This typically involves continual hashing or continual squaring. The Chia blockchain network uses both Proof of Space (PoS) and Proof of Time (PoT). The Proof of Space method makes use of otherwise unused hard-disk space.

With a Verifiable Delay Function (VDF), we can prove that a given amount of work has been done by a prover (Peggy). A verifier (Victor) can then check a proof value from the prover and be convinced that the work has been done, without having to repeat the work themselves. A Physical Unclonable Function (PUF) is a one-way function which creates a unique signature pattern based on the inherent delays within the wires and transistors. This can be used to link a device to an NFT. In blockchain applications, we can use non-interactive zero-knowledge (NIZK) proofs for the equality (EQ) of discrete logarithms (DL) — DLEQ. With this — in discrete logarithms — we have a = g^x and b = h^x, and we can prove knowledge of x by showing that log_g(a) = log_h(b).
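A minimal sketch of such a DLEQ proof is the Chaum-Pedersen protocol made non-interactive with a Fiat-Shamir challenge. The group parameters below are assumptions chosen for illustration and are far too small for real use, where ~256-bit group orders are the norm.

```python
import hashlib
import secrets

# Toy prime-order subgroup: q divides p-1; g and h both have order q mod p
p, q = 167, 83
g, h = 4, 9

def challenge(*vals):
    # Fiat-Shamir: hash the transcript down to a challenge in [0, q)
    data = "|".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    a, b = pow(g, x, p), pow(h, x, p)   # a = g^x, b = h^x share the same x
    k = secrets.randbelow(q - 1) + 1
    t1, t2 = pow(g, k, p), pow(h, k, p)
    c = challenge(g, h, a, b, t1, t2)
    s = (k + c * x) % q
    return a, b, (t1, t2, s)

def verify(a, b, proof):
    t1, t2, s = proof
    c = challenge(g, h, a, b, t1, t2)
    # g^s = t1 * a^c and h^s = t2 * b^c iff log_g(a) = log_h(b)
    return (pow(g, s, p) == (t1 * pow(a, c, p)) % p and
            pow(h, s, p) == (t2 * pow(b, c, p)) % p)

a, b, proof = prove(x=17)
assert verify(a, b, proof)
```

The verifier learns that the two logarithms are equal without ever learning x itself.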
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27738252
info_outline
Bill Buchanan - Dead Man’s PLC (DM-PLC)
08/13/2023
Bill Buchanan - Dead Man’s PLC (DM-PLC)
You can just imagine the movie trailer … “Your worst enemy has taken over all your flights, and you cannot remove them from your network. They demand a $1 billion ransom, or else they will bring every flight down. Bob accidentally removes one of the controllers — you now only have 25 minutes to save the lives of those in the air!”

We have all seen movies with a dead man’s switch — where an elaborate mechanism is created for someone to be killed if a ransom is not paid, and where anyone who tampers with the mechanism will cause the switch to activate and kill the target. Now, this approach is coming to attacks on CNI (Critical National Infrastructure) and industrial control systems (ICS). We have generally been fortunate that PLC (Programmable Logic Controller) systems have been largely untouched by cyberattacks, but that is no reason not to focus on their security. Significant risks exist, especially for attacks against CNI — as highlighted by Stuxnet.

In a new paper, Richard Derbyshire and a research team at Orange Cyberdefense and Lancaster University focus on the scenario where an entire environment is controlled by an adversary and where all of the assets poll each other to make sure they remain untampered. Any change to the configuration, or the removal of any of the controllers, will cause the system to go “full ON” — similar to a dead man’s switch [1]. The paper outlines the increase in cyber-extortion (Cy-X) tactics, where a key focus now is typically to both encrypt and exfiltrate the target’s data. In most cases, this type of approach can be defended against in a PLC environment by replacing existing hardware or resetting the configuration of devices (which is equivalent to a restore from backup). DM-PLC showcases a methodology which overcomes these recovery methods.
CrashOverride and Triton

In 2016, the CrashOverride malware was installed on Ukrainian critical infrastructure, resulting in a cyberattack on the power supply network. It happened at an electrical transmission station near the city of Kiev (Ukrenergo) in December 2016 and resulted in a blackout for around 20% of the Ukrainian population. Luckily, it only lasted for one hour, but many think that it was just a test — a dry run — for a more sustained attack. This attack has now been traced to the CrashOverride (or Industroyer) malware.

A previous attack on the Ukrainian power infrastructure in 2015 involved the manual switch-off of power to substations, but the newly discovered malware learns the topology of the supply network — by communicating with control equipment within the substations — and automatically shuts down systems. The company that analysed it (Dragos) thinks that it could bring down parts of the energy grid, but not the whole of it, and that the activation date of the malware sample was 17 December 2016. They also noted that the malware can be detected by looking for abnormal network traffic, such as scanning for substation locations and probing for electrical switch breakers. Many suspect it may have been delivered through phishing emails (as with the 2015 attack), and that CrashOverride infected Microsoft Windows machines within the target network and then mapped out control systems in order to locate the key supply points, along with recording network activity which could be sent back to the controllers of the malware. After the discovery phase, it is thought that CrashOverride can load up one of four additional modules, which can communicate with different types of equipment (such as Honeywell and Siemens systems). This could allow it to target electrical supply networks in other countries. In 2018, too, it was reported that the Triton malware brought down safety systems for an oil and gas network in the Middle East.
This was achieved by reverse engineering the firmware used by device controllers, and the malware focused itself on specific parts of the infrastructure. A typical attack can often involve disabling the safety systems which protect the infrastructure against a system overload. When an overload does occur, the safety systems do not then protect the equipment, and this can lead to severe physical damage to the infrastructure. A tripping of just one part of the safety system, too, can cause a chain reaction and bring down a large part of the infrastructure.

DM-PLC

With DM-PLC, all of the PLCs and engineering workstations (EWs) constantly poll each other and detect any deviations from the required behaviour — and thus disallow any changes to the overall running of the adversary’s objectives. If the system is tampered with, it activates a dead man’s switch, where the PLCs set their outputs to “ON”. This could have a devastating effect on the physical infrastructure that the PLCs connect to. This — the research team say — moves away from the traditional ransomware approach of encrypting data within the infrastructure to one that allows the system to continue, but under the adversary’s command. Figure 1 outlines the basic setup, where the team set up a number of objectives [1]:

- Deployable with minimal prerequisites from an EW.
- Runs in parallel to existing operational code.
- Does not impact existing operational code.
- Is resilient to tampering/response and recovery processes.
- Includes tamper detection.
- Can enact undesirable widespread operational impact.
- Requires a key to relinquish control back to system owners.
- Can be tested prior to being armed.

Figure 1 [1]

The main focus of the work is to define a framework for a DM-PLC, and then define mitigation techniques. In order to maintain the deadlock, the devices monitor each other for changes (Figure 2), and alerts are raised for any perceived changes.
Figure 2: Polling of devices

Overall, the team successfully tested three main operations [1]:

- A PLC being removed from the network.
- The DM-PLC ransom timer expiring.
- The victim entering a code having ‘paid’ their ransom.

In a scenario with three PLCs, Figure 3 shows the response to PLC 3 being removed from the network, where PLC 1 and PLC 2 set their outputs to 1 after 25 seconds — which causes the dead man’s switch to activate. Thus, someone taking PLC 3 off the network has 25 seconds before the whole of the network goes into “full ON” mode.

Conclusions

Dead Man’s PLC sounds like a script for a movie, but it is a movie that could play for real. Our CNI is precious, and we need to protect it. Otherwise, here’s another movie … “Your worst enemy has taken over all the fun rides, and you cannot remove them from your network. They demand a $1 billion ransom, or every ride will stop instantly. Bob accidentally removes one of the controllers — you now have 25 minutes to save lives!”

References

[1] Derbyshire, R., Green, B., van der Walt, C., & Hutchison, D. (2023). Dead Man’s PLC: Towards Viable Cyber Extortion for Operational Technology. arXiv preprint arXiv:2307.09549.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27737391
info_outline
Bill Buchanan - The 100 Basic Rules of Cryptography (and Secure Programming)
08/10/2023
Bill Buchanan - The 100 Basic Rules of Cryptography (and Secure Programming)
Kerckhoffs’s principle defines that “a cryptographic system should be designed to be secure, even if all its details, except for the key, are publicly known” — but there aren’t too many other rules defined. So here are my 100 Basic Rules of Cryptography (and Secure Programming).

First, my Top 10: Cryptography is both an art and a science. Cryptography needs to be both theoretical and practical — one without the other leaves gaps. The maths is not actually that difficult — it is just the way that researchers talk about it that is a problem. Know your knowledge gaps — and plug them. Your university education is unlikely to have properly set you up for the serious world of cryptography. Crypto is cryptography and not cryptocurrency. Few methods are perfect — know the limits of any method you use. Don’t cook your own crypto! How many times do you have to say this to yourself? Security-by-obscurity never works that well. Confidentiality, integrity and assurance are different things and require different methods — don’t merge them all together into one thing.

And the rest: Digital certificates and PKI (Public Key Infrastructure) are two of the least understood areas of cybersecurity — don’t expect many people to understand them. For public key encryption, you encrypt with Alice’s public key, and she decrypts with her private key. For public key signatures, you sign a hash of the message with your private key, and Alice verifies it with your public key. Your baseline hack is always brute force — know how many dollars it would cost the NSA to crack something. Machine code can reveal your secrets. A hack of one key should not lead to the loss of all the previous keys. A key should only exist for the time it was meant to exist for, so use session keys wherever possible, and watch out for long-term keys. Your role is typically to protect the user, not to reveal things to the NSA. Listen to experts, and be a teacher to others.
Be open with your knowledge, and don’t pretend you know something that you don’t. Try to understand the basics of the maths involved — otherwise, you are trusting others. Understand entropy, and know how to calculate it and prove it with experiments. Run entropy calculations before pushing related code to production. Don’t use a method unless it has been peer-reviewed and published. Understand the strengths and weaknesses of your methods. No method is perfect — but at least know what problems it might cause and try to mitigate against these. Know why you have chosen X over Y, and be able to defend the reason to others. The maths may be sound, but the human is typically the main weakness. Everything will work fine until it doesn’t. Test for out-of-band conditions as much as for good conditions. Zero is not your friend in cryptography, so always know what it will do to your system. Don’t just catch exceptions; action them. Do not allow processing to continue unless everything checks out okay. Log good and bad: catch good things along with bad things, and monitor your security logs for exceptions and bad operations. Remove debugging code from your production version. Keep up to date with the latest research. Beware of backdoors in methods and code. Side channels are smart ways to reveal the 1s and 0s, and every bit discovered reduces the security level by one bit — dropping the price tag for a crack by a half. The core security of your system is likely to depend on the generation of its random seed values. Make sure they are generated so that they do not repeat within a given time and cannot be brute-forced by the NSA. If you can, use true randomness rather than pseudo-randomness; if you must generate pseudo-randomness, take the randomness from several sources. Continually review your code, and get external experts to review it. Don’t push your code into production until you have tested it — fully!
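As a quick experiment for the entropy rule above, the Shannon entropy of a byte string can be estimated with the standard library. This is a statistical check only — a high measured entropy does not by itself prove cryptographic quality.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant string has no uncertainty at all
assert shannon_entropy(b"aaaaaaaa") == 0.0

# OS randomness should measure close to the 8 bits-per-byte maximum
assert shannon_entropy(os.urandom(4096)) > 7.5
```

Running this over key files, seeds or token generators before release is a cheap sanity check that something has not silently degraded to low-entropy output.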
Check the code in the libraries you use, and perhaps don’t use them if there are no open-source repositories. If you can, open-source your libraries. Watch out for version updates in your code, and try to lock a given (known) version to your code. Encrypt anything that looks like PII (Personally Identifiable Information), both at rest and over the air. Remember that running memory can reveal keys and cryptographic artefacts, so know the risk. Learn a new method every day, and don’t get stuck with the same old crypto! Quantum computers will happen one day and will disrupt our lives, so start thinking about the impact they might have. Revealing your private keys is like giving someone the keys to your castle, so know where they are and restrict access to them. Only give access to private keys to those you most trust to use them properly. Air-gap your development environment from your production environment, and don’t let private keys propagate. The best systems use zero trust: no rights to anything unless they can be proven. You will — at times — need to revoke your public keys. Be aware of the processes involved and of the embarrassment it will cause. Educate the board on the importance of good crypto! Encourage your team to undertake academic study, and get them to reserve a few hours a week for independent study of new methods. Read classic research papers, and don’t dismiss methods because they are not currently used. Collaborate with an academic team which has complementary skills to your own team. If you lack theoretical knowledge in your team, get external experts to come in for a chat. Once a private key is destroyed, any data encrypted with it is also likely to be lost. Limit the copies of a secret key. If you can, keep your keys in an HSM (Hardware Security Module). Cryptography in the cloud pushes you to the limits and will often enhance the methods you use.
Back up those secret keys, but make sure they are well wrapped before putting them in a place that others can access. Learn about the garbage collector and how your program deals with data when running. Leave the coding of the maths in papers to experts. Don’t trust auditors to prove the security of your system; generally, auditors are likely to have little understanding of the methods you will use — get experts to review your methods. Beware of denial of service (DoS) on your code — such as continual exception handling. Most systems boil down to one single thing that defines the overall security — know what this is. If you are interested in the X method, go and contact the person who created it — you will often be surprised how open many researchers are in sharing their ideas. Watch those CVE updates like a hawk — it can be a race between you and your adversary. On cloud-based systems, log everything about your keys. In the cloud, only allow keys to be used for the purpose they should be used for, and don’t use them generally. Limit access to keys for services and roles. Reduce the impact of a stolen or lost secret key. Never encrypt large-scale data infrastructures with the same secret key — try to use envelope encryption. Passwords are dead — replace them with cryptographic signing methods. Your enemy can be within, so watch out for key stealing and deletion. Tell your incident response team about the difference between losing encrypted data and non-encrypted data. Scan your network for secret keys placed in locations where they should not be stored. Know what the acronyms mean and not just what they stand for. Know why we use message authentication codes (MACs). If you need to generate an encryption key from a password or secret, pick a good KDF (Key Derivation Function), and know the cost to crack it. Rainbow tables aren’t much of a threat anymore — as long as you make sure you use a good salt value.
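For the KDF rule, here is a sketch using the standard library’s PBKDF2 and scrypt. The iteration and cost parameters are illustrative choices, not a recommendation — pick them based on how much each guess should cost your adversary.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # a fresh 128-bit salt defeats rainbow tables

# PBKDF2-HMAC-SHA256: cost scales with the iteration count
key = hashlib.pbkdf2_hmac("sha256", password, salt,
                          iterations=600_000, dklen=32)

# scrypt: memory-hard, so GPU/ASIC attacks are also expensive
key2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                      maxmem=64 * 1024 * 1024, dklen=32)

assert len(key) == 32 and len(key2) == 32
```

The same password with the same salt and parameters always yields the same key, which is what lets you re-derive an encryption key at login time without ever storing it.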
Everything below 72 bits of entropy is likely to be crackable by the NSA — unless a slow method is used for the processing of the key. Nonce/salt values should start at 96 bits — never use anything less. RC4 has been cracked … get over it. Stream ciphers can often be broken — make sure you never reuse a nonce with the same key. DES and 3DES are largely uncracked, but DES only uses a 56-bit key, so never use it. Compared with ECC, RSA requires a good deal more processing and has larger key sizes, but it is still great. ECDSA suffers from nonce-reuse attacks. For digital signatures, you should delete the nonce value after the signature has been created, but for symmetric key encryption and hashing, you need to store it. In ECC, the public key is a point on the elliptic curve. For secp256k1, we have a 256-bit private key and a 512-bit (x,y) point for the public key. A “04” prefix in the public key indicates an uncompressed public key, while “02” and “03” are compressed versions with only the x-coordinate and an indication of whether the y-coordinate is even or odd. Consider writing papers — it is a great way to develop your writing and abstraction skills. Don’t sit back with the status quo — try to continually improve privacy and security. Have a risk register and maintain it. Don’t be shy, and don’t hide things. Have someone check your code — on a regular basis. Remember Moore’s Law: computing power is increasing every year, so know that something that is safe now might not be in 10 years. Know how long something needs to be kept secure, and secure it for that age (and a lot more). And there you go … 100 rules of cryptography.
/episode/index/show/81d23e03-b5be-467e-8de4-c6fa7cccdb51/id/27711531