
[3.0] Contemporary Cryptology

v1.1.0 / chapter 3 of 4 / 01 may 23 / greg goebel

* In the 21st century, cryptological technology is in wide and expanding use, particularly thanks to the internet. This chapter surveys modern cryptotech.

MODERN CRYPTOLOGY


[3.1] MESSAGE AUTHENTICATION / DIGITAL SIGNATURES
[3.2] SSL / VPN / WPA
[3.3] PASSWORDS / MFA / SMART CARDS / FILTERS
[3.4] AZURE SPHERE
[3.5] MISCELLANEOUS CRYPTOTECH

[3.1] MESSAGE AUTHENTICATION / DIGITAL SIGNATURES

* The Diffie-Hellman algorithm and public-key cryptography were big steps forward in providing security for the emerging internet, but a few other pieces were required as well. One of the important elements of a useful cryptosystem is "message authentication", or ensuring that Bob can receive a message from Alice and actually be certain it is from Alice. Related to the concept of message authentication are the other concepts of "data integrity", or ensuring that the data in the message has not been tampered with, or corrupted by a transmission error; and "non-repudiation", or ensuring that if Alice sends a contract to Bob, she can't later claim that it was a fabrication and disown it.

Diffie and Hellman devised an elegant little trick that allows public key cryptography to authenticate a message. This trick is based on the fact that RSA and other public key ciphers have a mirror symmetry. Normally, in RSA, a message is encrypted with a public key and can only be decrypted with the corresponding private key. Anyone can encrypt a message with the public key, but only one person can decrypt it using the secret private key. The flow of the encryption is "many to one".

It turns out that the reverse is also true: a message can be encrypted with a private key, and can only be decrypted with the corresponding public key. This sort of "private-key encryption" might seem silly, since then anybody can use the public key to read the message: the flow of the encryption is "one to many". The trick is that the public key will only decrypt messages encrypted with the matching secret private key.

Suppose Alice comes to an agreement with Bob, and wants to provide a document trail. Alice can perform private-key encryption on the agreement and send it to Bob. Bob then uses her public key to decrypt the message. Since Alice's public key will only decrypt a message that was encrypted with her unknown private key, as long as Bob is certain he has Alice's proper public key, he is just as certain that the message came from Alice.

What happens, however, if Bob later tampers with the message Alice sent him to cheat her on the agreement? That's prevented by Alice sending Bob a short "fingerprint" of the message called a "hash", or sometimes a "message digest". A hash is a relatively short string of bits, typically 160 to 512 bits long, that results from running the full message through a "hashing" algorithm. Such hashing algorithms are designed to produce a value that is effectively unique for each different message run through the algorithm -- in other words, the probability of two different messages resulting in the same hash must be extremely low. Furthermore, even a single minor change in the message must result in a distinctively different hash.

Of course, hashes are one-way: there's no way to reconstruct the original message from the hash, any more than a sausage can be turned back into a pig. The most popular hashing schemes include the "Secure Hash Algorithm (SHA)" family -- SHA-1, with a 160-bit hash, now deprecated as insecure; SHA-2, with hash lengths from 224 to 512 bits; and the newer SHA-3 -- along with the older 128-bit "MD5", long since broken and unsuitable for security use.

There is also the "RIPEMD-160" hash algorithm, developed by a team of Belgian researchers; "Whirlpool", partly developed by Vincent Rijmen of AES fame, and based on AES; and the "BLAKE" series of hash functions, put together by an international team of researchers.
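
As a quick illustration of hashing behavior -- one-way, and acutely sensitive to small changes -- here's a little sketch in Python using the standard hashlib module; the messages are made up for the example:

    # Even a one-character change in the message produces a completely
    # different digest. Uses Python's standard hashlib module.
    import hashlib

    msg1 = b"Alice agrees to pay Bob $100."
    msg2 = b"Alice agrees to pay Bob $700."   # one character changed

    print(hashlib.sha256(msg1).hexdigest())
    print(hashlib.sha256(msg2).hexdigest())
    # The two 256-bit digests bear no resemblance to each other, and there
    # is no practical way to recover either message from its digest.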

In any case, to make sure that Bob can't tamper with the agreement, Alice sends a hash of the text along with it, with the hash encrypted with her private key to ensure that she's the only one who could have originated it. If Bob tampers with the agreement, it won't match the hash any longer. A hash protected with a shared secret key is formally known as a "message authentication code (MAC)"; a hash encrypted with a private key, as Alice does here, serves as a "digital signature". Producing the signature is formally known as "digital signing", and checking it with the public key is formally known as "verifying".
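
To make the signing scheme concrete, here's a rough sketch in Python using the third-party "cryptography" package; the key size and padding choices are illustrative assumptions, not a description of any particular real-world system:

    # Sketch of digital signing: Alice signs a hash of the agreement with
    # her private key; Bob verifies it with her public key.
    # Requires the third-party "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_public = alice_private.public_key()

    agreement = b"Alice agrees to sell Bob the boat for $5000."

    # Alice: hash the agreement and sign the hash with her private key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = alice_private.sign(agreement, pss, hashes.SHA256())

    # Bob: verify with Alice's public key. If the agreement has been
    # tampered with, verify() raises InvalidSignature.
    try:
        alice_public.verify(signature, agreement, pss, hashes.SHA256())
        print("signature OK -- message came from Alice, unmodified")
    except InvalidSignature:
        print("signature check FAILED")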

* The next question is: if Bob gets Alice's public key, how does he know it is valid? That's handled by organizations known as "certification authorities (CA)", which can be non-profit, commercial, or governmental organizations. Any user who wants to be certified can deal with a CA, which will validate the user and provide a "digital certificate" or "cert" -- which consists of a public key, data on the owner of the public key, an expiration date on the certification, and one or more digital signatures associated with the certification authority. There are several levels of certification -- from simply validating that a website is what it says it is, to full validation of the website owners.

There are a number of different digital certificate specifications, such as the "X.509" standard format, devised by the International Telecommunication Union (ITU). Certificates are used in internet transactions to validate one or both sides of the transaction, with certs checked automatically through a "public key infrastructure (PKI)" system. Some CAs, like DigiCert, were set up specifically as CAs, while web hosting service GoDaddy does it as a sideline. The biggest CA is "IdenTrust", which is an agent of a group of dozens of banks, including Barclays, Chase Manhattan, Citibank, and Deutsche Bank.
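
As a small demonstration of what a cert contains, the sketch below fetches a website's certificate and prints the fields discussed above; it assumes Python's standard ssl module plus the third-party "cryptography" package, and the host name is just an example:

    # Fetch a server's cert and print the fields discussed above: who it
    # belongs to, who signed it, and when it expires.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    print("subject:   ", cert.subject.rfc4514_string())
    print("issuer:    ", cert.issuer.rfc4514_string())
    print("not after: ", cert.not_valid_after)
    print("public key:", cert.public_key())   # the key the cert vouches for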


[3.2] SSL / VPN / WPA

* The pioneers of public-key cryptography like Whitfield Diffie have been proven right, even conservative, in their expectations for the use of cryptographic technologies on the internet. For one particularly important example, most people who surf the internet to make online purchases have used cryptography to provide their credit-card numbers to vendors, using the "Secure Sockets Layer (SSL)" protocol. Pages protected by SSL are designated with an "https" prefix instead of the conventional "http" prefix.

SSL was invented by Netscape in 1994. It was adopted by the Internet Engineering Task Force's Transport Layer Security (TLS) committee as the basis for a standard in 1996, and the slightly modified variant of SSL released by the committee is known by the committee's acronym, "TLS". However, the acronym "SSL" is used in this document. Incidentally, Microsoft tried to push a competing security scheme designated "Private Communication Technology (PCT)", but it didn't catch on; Microsoft ended up adopting SSL.

SSL works more or less transparently to the user, using hybrid encryption. In simplified terms, suppose Alice wants to make a purchase from an online vendor, and needs to give the vendor her credit-card number. After establishing the vendor's identity using a cert, her web browser will obtain the vendor's public key and use it to encrypt a secret symmetric key, produced at random by the web browser. The browser will pass the encrypted secret key to the vendor, and this secret key will be used to encrypt the session.
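
The core of the hybrid scheme can be sketched as follows -- this is not the actual SSL/TLS message flow, just the underlying idea, with RSA-OAEP and AES-GCM chosen here purely for illustration, using the third-party "cryptography" package:

    # Rough sketch of hybrid encryption: the browser wraps a random
    # symmetric session key with the vendor's public key, then uses the
    # symmetric key for the bulk traffic.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Vendor's key pair (the public half would come from the vendor's cert).
    vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    vendor_public = vendor_private.public_key()

    # Browser: generate a random 256-bit session key and wrap it.
    session_key = AESGCM.generate_key(bit_length=256)
    wrapped_key = vendor_public.encrypt(session_key, oaep)

    # Vendor: unwrap the session key with its private key.
    unwrapped = vendor_private.decrypt(wrapped_key, oaep)

    # Both sides now share a secret key for the fast symmetric cipher.
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"credit card 1234 5678 ...", None)
    print(AESGCM(unwrapped).decrypt(nonce, ciphertext, None))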

Incidentally, when web browsers are installed, they include a built-in store of certs from prominent certification authorities. Also incidentally, the SSL protocol doesn't try to validate who Alice really is, at least not by default, since validating online customers is not practical at present. In some critical transactions, a cert may be requested from Alice.

A range of ciphers, with options for different key lengths, are available on web browsers to support SSL. Even with short keys, the security provided by the cipher is adequate for most purposes: the computing power needed to crack a credit-card transaction would cost far more than the transaction is worth. However, web browsers are very vulnerable to attack by malicious software, "malware", that exploits browser design errors.

* Hybrid encryption can similarly be used to protect emails or text messaging, as well as encrypt voice communications. The popular Signal app, targeted at smartphones but available on most computing platforms, can be used for both encrypted messaging and voice communications.

Another invisible use of cryptotech on the internet is in "virtual private networks (VPN)". As the name implies, a VPN is a private network that can only be accessed by a specified group of users, but still operates on the public internet. It may be done in a "peer-to-peer" fashion, or through servers. If performed through servers, it makes tracing internet traffic back to an individual source very troublesome.

Popular "wi-fi" wireless networks are protected against intrusion by cryptotech. The first, "Wireless Equivalent Privacy (WEP)", was introduced in 1999. It proved weak and easy to crack, being formally replaced by "Wi-Fi Protected Access (WPA)" in 2003; it has since been upgraded to the improved "WPA2", and then "WPA3". WPA3 is robust, but the older specs, including the highly vulnerable WEP, linger for the time being.


[3.3] PASSWORDS / MFA / SMART CARDS / FILTERS

* The notion of internet security leads to the concept of general computer security, which renders down to a simple question: who is allowed to access a computer?

Traditionally, the first line of computer defense was the password, which of course is a form of codeword. Passwords are notoriously weak, in large part because users tend to rely on easily-guessed passwords. Today, most systems will grade the difficulty of passwords, and give a warning if the password is weak. A user might still want to use a weak password on a system whose security is a "don't care".

There was an inclination in the past to push users to create very elaborate passwords, consisting of unpredictable patterns of letters and digits -- but that's fading out, since such passwords were difficult to remember, meaning users had a tendency to either forget them, resulting in wasted time establishing a new password, or write them down, making the passwords insecure. Besides, when hackers do get into accounts, it's rarely because they were able to guess the password; it's because they stole the passwords through some inside job or using key-logging malware.

Even a brute-force search is easily defeated by a reasonably designed password. Login systems now use "login throttling", ensuring that if a login fails, the login system waits a few seconds before asking the user to try again. That slows a brute-force search down to a crawl. A second level of defense, not much more complicated than the first, is to recognize large numbers of consecutive attempts to log in, obviously indicating someone trying to break in, and take appropriate security measures.

Suppose we created a 12-character password using upper and lower case letters plus numeric digits; that would mean a brute-force search would have to test 62^12 == 3.2E21 possibilities. Trying to crunch through all of them at a rate throttled to one guess a second would take far longer than the age of the Universe.

Now suppose we have a password created from four words out of a list of 250 words arranged in different orders; then we have about 250^4 == 3.9E9 possibilities. Obviously, the first password is much stronger, but it would still take over 120 years to try all the possibilities for the second password, and the login system would get wise after only a relatively small number of attempts. The stronger password is like putting ten thousand locks on a door when ten is neurotic, and people usually just try to get in through the back door anyway.
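
The arithmetic is easy to check; the snippet below simply reproduces the two keyspace sizes and the time needed to exhaust them at one throttled guess per second:

    # Reproduce the keyspace arithmetic: a 12-character alphanumeric
    # password versus four words drawn from a 250-word list, searched at
    # one throttled guess per second.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    strong = 62 ** 12        # upper + lower case letters + digits, 12 characters
    weak = 250 ** 4          # four words from a 250-word list

    print(f"62^12 = {strong:.2e} guesses -> {strong / SECONDS_PER_YEAR:.1e} years")
    print(f"250^4 = {weak:.2e} guesses -> {weak / SECONDS_PER_YEAR:.0f} years")
    # About 1.0e14 years for the first, roughly 124 years for the second.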

To be sure, people shouldn't use insecure passwords. One trick that hackers actually can make good use of is to accumulate a few thousand common and predictable passwords and run through the list, which doesn't take too much effort. The military has long known not to give secret projects codenames that have any relationship to the specifics of the project, and similarly users shouldn't come up with passwords that have anything visibly to do with themselves or the account being logged into. It is also unwise to extract passwords from popular tunes or the like that are easily guessed; users are wiser to leverage off something known mostly to themselves and unfamiliar to the general public, or nonsensical-sounding phrases: "greenboatsix".

Selectively mixing in a few substitutions of numbers for letters -- "1" for "i", "2" for "z", and so on -- and selectively applying imaginative spellings to password elements helps, too. If we have any worry about the possibility of a password being second-guessed, we can tack a short security code onto it: for example, "wotzupd0k" might be guessed, but "wotzupd0k-z23" wouldn't.

A hacker can't get a "fingerhold" into a password; if one character is wrong, the password doesn't work, and the hacker has no way of knowing if only one character's wrong, or if they all are. Users should also have unique and relatively tough passwords for critical sites, such as bank or stock broker accounts, and not re-use the same password created for casual accounts where there's nothing more important to steal than, say, an email address.

We don't really need to bust our brains to come up with a password like "Jm0q845xxiN4", and then jump through hoops to make sure we don't forget it. We can do just as well with, say, "yagotabjok1n" or "work1nreverse" or "raz0rblastr" -- something not easy to guess, but not so hard to remember. We can even have some fun with it; it's cryptology for everyone.

Of course, we tend to accumulate passwords in quantity, and there's no way to avoid writing them down. If we keep a file of passwords on a personal computer, that means that anyone stealing or hacking into the computer can get at all the passwords. Not a worry; free encryption software such as GNUPG is easy to get, and we can encrypt the password file to prevent snoops from reading it if they steal a PC or smartphone. All we have to do is remember one password, to allow us to decrypt the password file. "Password manager" applications are available to make the process "user friendly".

* There are variations on passwords, such as "personal identification numbers (PIN)", and the "pattern" scheme used on smartphones, in which a user traces out a "connect the dots" pattern with a fingertip on a 3 x 3 grid of dots. Password schemes are not very strong, and so there's been a push towards "multi-factor authentication (MFA)". MFA, as its name implies, involves having a dual login scheme, one based on "something one knows" and the other on "something one has". "Something one knows" is a password, PIN, or pattern. "Something one has" is physical.

The baseline technology for "something one has" is the "smart card", which is a high-tech charge card, supporting a specialized form of computer access: access to a bank charge account. The smart card is relatively new in the USA, though long-established elsewhere. Due to the inertia of banks and retailers, Americans held on to the old magnetic-stripe or "magstripe" credit and debit cards, though their security was minimal, the only real protection available being a user PIN. Card data could be easily copied using a "card skimmer" reading device, with scammers ingenious enough to attach camouflaged skimmers to the front of gas pumps or automatic teller machines.

Smart cards are much more secure. Traditionally, if there were charge-card fraud, the charge-card companies had to eat the losses; they then told retailers that, if they didn't change to smart cards, they'd have to eat the losses themselves. There was some grumbling and there were problems, but now the magstripe card is effectively extinct.

A smart card has the same form-factor as a traditional charge card, but it contains a processor and nonvolatile memory. It does not have a battery, however; the processor is powered when the card is accessed by a reader, either through an 8-pin electrical contact that also provides data in and data out, or through an induction antenna that also supports close-range "near-field communications (NFC)" -- the card doesn't even need to be taken out of the wallet. Readers increasingly have both contact and NFC interfaces.

The smart card stores data paralleling the data printed on the card, plus a public and a private key. Only the card knows its private key; it cannot be read out, with any attempt to probe the chip wiping its contents. To perform a transaction, a smart card is plugged into a "terminal" or "reader", with an associated "PIN pad", that allows a user to enter a PIN.

The reader obtains the card data and its hash from the card, along with the card's public key -- with the card data and hash having been encrypted with the card's private key, which the reader decrypts using the public key. The card company validates the public key and the data, along with the PIN from the PIN pad; the reader then sets up and validates the transaction with the bank. The smart card only authorizes the transaction, which proceeds through hybrid encryption without further involvement with the card.

The difficulty with this scenario, as described, is that the card data could be recorded by a Black Hat and played back to get access to the bank account. That means the reader has to take an additional step to ensure that the card is valid. The reader generates a pseudorandom number, encrypts it with the card public key, and sends it to the card; the card decrypts the pseudorandom number with its private key, and sends it back to the reader. If the number matches the original, the card is the one with that public key.
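
The challenge-response step might be sketched as follows; the key size and padding are illustrative assumptions, and a real card of course performs its half of the exchange inside its tamper-resistant chip, not in ordinary software:

    # Sketch of the reader's liveness check: encrypt a random challenge
    # with the card's public key; only the genuine card, holding the
    # private key, can decrypt it and send it back.
    # Requires the third-party "cryptography" package.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    card_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    card_public = card_private.public_key()   # read out by the terminal

    # Reader: generate a random challenge, encrypt with the card's public key.
    challenge = os.urandom(16)
    to_card = card_public.encrypt(challenge, oaep)

    # Card: decrypt with its private key and return the result.
    from_card = card_private.decrypt(to_card, oaep)

    # Reader: if the reply matches, the card really holds the private key.
    print("card is genuine" if from_card == challenge else "card REJECTED")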

* While smart cards have become an effective standard for electronic point-of-sale transactions, using smartphones to perform such transactions is catching on. As with smart cards, a smartphone has a cryptographic subsystem with a completely secret private key that makes it uniquely identifiable, providing support for a transaction app. There are a number of competing apps, Apple Corporation's "Apple Pay" app being one of the better-known. It operates on the Apple iPhone -- of course -- and is based on three primary elements: a secure processor that holds the payment keys; a fingerprint reader for validating the user; and an NFC interface for communicating with the payment terminal.

A user obtains access to the phone as usual, through password or PIN and the fingerprint reader. The phone can then be used to set up transactions using the Apple Pay account; while Apple isn't very forthcoming about the details, it appears the phone is validated using public and private keys associated with the phone's secure processor.

The iPhone's use of a fingerprint for validation points to the growing use of "biometric ID" in MFA. Along with fingerprints, biometric ID also makes use of palm reads -- the pattern of veins in the palm is distinctive to individuals -- iris scans, and voice or facial recognition, a smartphone using a "selfie" camera to do the job. Incidentally, the use of fingerprint readers in Brazil demonstrated a very nasty problem: thieves would simply cut off a finger. Palm readers don't have that problem, since they can't read a severed hand.

For computer logins, hardware keys are also now available; they're conceptually similar to smart cards, featuring a secure processor with memory, with a public and private key for validation. MFA standards have been established and promoted by the industry "Fast Identity Online (FIDO)" alliance, which was established in 2013, and now has hundreds of members.

* One of the difficulties with smart cards is that they don't, at present, buy anything for online transactions: people still have to pay by typing in the charge-card number, bypassing the smart-card tech. Banks provide some security for such transactions, however, by screening purchases with software -- increasingly based on artificial intelligence (AI) technology -- that checks for anomalous purchases.

The screening system, or "filter", contacts users to notify them of problems and request user action. An email may be sent to clients to tip them off that something seems wrong, and steer them towards corrective action, if needed. An online vendor, if asked to send a product to an unfamiliar address, will similarly send an email requesting validation, typically to re-enter a charge card number.

In the same fashion, banks and other organizations will screen critical transactions -- for example, transfers of large sums of money to another bank -- by sending clients temporary ID codes over email. A client will have to enter the ID code to complete the transaction; the ID code will "time out" after a few minutes, and become invalid. Such multi-factor authentication schemes have a weakness in that they depend on the validity of the client email address on file. Configuring an account with an email address and other data to begin with does require additional security checks -- but in the absence of a robust ID scheme, the system remains not so hard to game.
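
A temporary ID code of this sort amounts to little more than a random code plus a timeout; a minimal sketch, with an arbitrary six-digit format and five-minute lifetime chosen for the example, might look like:

    # Sketch of a temporary ID code: a random six-digit code that expires
    # after a few minutes. Format and lifetime are arbitrary choices.
    import secrets
    import time

    CODE_LIFETIME = 5 * 60   # five minutes, in seconds

    def issue_code():
        """Generate a random six-digit code and its expiry time."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        return code, time.time() + CODE_LIFETIME

    def check_code(entered, issued_code, expires_at):
        """Accept the code only if it matches and hasn't timed out."""
        return time.time() < expires_at and secrets.compare_digest(entered, issued_code)

    code, expires = issue_code()
    print("code sent to client's email:", code)
    print("accepted:", check_code(code, code, expires))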

Filters are also used to maintain computer security. Email filtering has become very sophisticated, capable of detecting unwanted emails -- "spam" -- to the point of greatly reducing the flood of it. Computers also have "antivirus" systems that can check for malware in emails or in the PC system. Neither of these work perfectly, but computer security systems are getting more sophisticated and capable all the time. Computers are gradually acquiring "system assistants" that will, among other tasks, monitor for security threats, and advise users accordingly.


[3.4] AZURE SPHERE

* Computer security schemes rely on a common set of technologies, but they aren't highly standardized. Microsoft Research developed a standardized computer security scheme under Project Sopris, with an emphasis on low cost and targeting "internet of things" devices.

Project Sopris was guided by a list of principles, the "Seven Properties of Highly Secure Devices", which define a security architecture applicable to all computer systems, including: a hardware-based root of trust; a small trusted computing base; defense in depth; compartmentalization of components; certificate-based authentication; renewable (updatable) security; and failure reporting back to the vendor.

The Sopris effort led to Microsoft's commercial "Azure Sphere" security system, which is based on three elements: certified microcontrollers with a built-in hardware security subsystem; a secure, Linux-based "Azure Sphere OS"; and a cloud-based "Azure Sphere Security Service" for authentication, threat monitoring, and software updates.

Azure Sphere is still fairly new, having been introduced in 2020. It is not yet clear how widely accepted it has been, but it has received good press. It may well be a stepping stone towards a universal computer system security standard.


[3.5] MISCELLANEOUS CRYPTOTECH

* It should be noted that, though AES with a 256-bit key is seen as satisfactory for commercial and many government purposes, it is not seen as good enough for the most secure government entities, for example the President of the United States. While a smartphone using AES with 256-bit keys is highly secure, it's not secure enough for the president, who instead carries a specialized ultra-secure smartphone. The problem is not AES, which can do the job; it's the smartphone, since commercial smartphones are known to suffer from operating system "holes" that can be exploited by hackers.

The presidential smartphone uses a highly secure voice scrambling scheme known as the "Secure Communications Interoperability Protocol (SCIP)". SCIP is not only extremely robust, but distribution of SCIP phones is also tightly controlled. For data messages, it uses the NSA "Type 1 Suite B" algorithm, whose details are secret.

The presidential smartphone is designed to access the "Secret Internet Protocol Router Network (SIPRNet)", a top-security network limited to a few hundred thousand people; it can also access the "Non-classified Internet Protocol Router Network (NIPRNet)", the government's "sensitive but unclassified" network, as well as public networks. Incidentally, by law all presidential calls have to be logged, and all email messages stored and available under subpoena.

* At the other end of the range of applicability, cryptotech is in common use in the form of "remote keyless entry (RKE)" systems, sometimes also known as "remote key entry" systems. RKE systems have been around for decades in the form of garage-door openers, but the technology has been refined and is now in more widespread use, particularly in the form of RKE systems for cars.

The old garage-door openers are still around, but they provide convenience, not security. The remote and receiver units could be set to unique 8-bit codes using mechanical switches, but that was mainly to eliminate interference between neighbors who both had remotes. Modern automotive RKE systems use a much longer code sequence that changes between uses. If a keychain RKE unit always sent the same or a predictable access code, a potential intruder could simply record its transmission using a radio scanner and play it back later to gain access.

Cheap integrated circuits are now available that provide improved security by generating binary code sequences of 40 bits or so. The simplest such security scheme is the "rolling code" algorithm, basically a form of pseudo-random number generator, which is used with one-way RKE units that can send a code but cannot pick up an acknowledgement from the receiver.

In the rolling code scheme, both the RKE unit and the receiver are set to an initial code seed and "rolling algorithm". Every time the key sends an access code to the receiver, both update the code identically according to the rolling algorithm. The initial code seed ensures that the rolling code sequences are effectively unique to a particular RKE system, and the sequences are designed so that it is difficult to backtrack through the values and find the seed. Since the receiver will not always pick up the RKE unit's radio signal, the receiver can "look ahead" 256 codes and still unlock the automobile.
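
A rolling-code scheme of this general sort might be sketched as follows; the hash-chain "rolling algorithm" and the other details here are illustrative simplifications -- real RKE chips use dedicated algorithms such as KeeLoq:

    # Sketch of a one-way rolling-code scheme: fob and receiver start from
    # the same seed and step the same "rolling algorithm" (here, a simple
    # hash chain); the receiver accepts any of the next 256 codes to
    # tolerate missed button presses.
    import hashlib

    def next_code(code: bytes) -> bytes:
        """One step of the rolling algorithm: hash the previous code."""
        return hashlib.sha256(code).digest()[:5]     # ~40-bit code

    class Fob:
        def __init__(self, seed: bytes):
            self.code = seed
        def press(self) -> bytes:
            self.code = next_code(self.code)
            return self.code

    class Receiver:
        LOOKAHEAD = 256
        def __init__(self, seed: bytes):
            self.code = seed
        def try_unlock(self, received: bytes) -> bool:
            code = self.code
            for _ in range(self.LOOKAHEAD):
                code = next_code(code)
                if code == received:
                    self.code = code      # resynchronize to the accepted code
                    return True
            return False                  # out of window: needs a manual reset

    seed = b"factory-programmed shared seed"
    fob, car = Fob(seed), Receiver(seed)

    fob.press(); fob.press()              # two presses out of the car's range
    print(car.try_unlock(fob.press()))    # True: still within the 256-code window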

This leads to a problem if the number of unreceived RKE transactions exceeds 256, or somewhat more plausibly the RKE unit is lost or broken. For that reason, automobiles with RKE systems have a "reset" capability. The owner gets in the car using the old-fashioned mechanical key, follows the reset initialization procedure -- say, turning the ignition on and off eight times in less than ten seconds -- and then pushes the button on the RKE unit. The receiver then syncs up to the RKE unit.

Two-way systems offer better security at higher cost. The RKE unit transmits an access code; the receiver reads the code and replies; the RKE unit acknowledges in turn; and the receiver then unlocks the doors. This means that both the RKE unit and the receiver can increment their rolling codes in step, eliminating the need for look-ahead. Incidentally, many RKE units can perform multiple functions, such as unlocking the doors or the trunk. This is done by sending a function code along with the security code.

* While the internet has led to aggressive development of new ciphers, it has also led to a revival of the ancient art of steganography, or hiding data.

Audio and image files tend to be large, and so they make convenient hosts for hidden data. Audio files consist of sequences of digital sound samples that give the value of the audio waveform taken at successive short intervals. For example, standard CD-quality audio consists of 16-bit samples taken at a rate of 44,100 times a second. Data can be hidden in the least significant bit of each sound sample of an audio file. The modification amounts to no more than adding a very slight amount of noise to the file, so slight that nobody could tell the difference between the original and modified files on listening to them (though fanatical audiophiles would no doubt claim they could).

Similarly, a typical image file consists of a grid of color dots, or "pixels", with the color of each pixel defined by three 8-bit values, giving the levels of red, green, and blue in the color. Data can be hidden in the least-significant bit of each of these values, and if the image is a color photograph or other image without large uniform blocks of color, the hidden data cannot be detected simply by looking at the image.
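
Least-significant-bit hiding can be sketched without any image library by treating the pixel data as a flat sequence of byte values; a real implementation would read and write an actual lossless image file, such as a PNG:

    # Sketch of least-significant-bit steganography: hide the bits of a
    # message in the low bit of successive 8-bit color values. The "pixels"
    # here are just a bytearray standing in for real image data.
    def hide(pixels: bytearray, message: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        assert len(bits) <= len(pixels), "image too small for message"
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit     # overwrite the least significant bit
        return out

    def reveal(pixels: bytearray, length: int) -> bytes:
        bits = [p & 1 for p in pixels[:length * 8]]
        return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

    cover = bytearray(range(256)) * 4          # stand-in for image color values
    stego = hide(cover, b"meet at dawn")
    print(reveal(stego, len(b"meet at dawn")))  # b'meet at dawn'

Each color value changes by at most one step out of 256, which is why the modified image is visually indistinguishable from the original.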

The advantage of steganography is that messages can be distributed without the authorities being aware that they are being sent. There is no reason the data could not be encrypted as well before being hidden. A number of steganography packages are now available for popular use.

Of course, there is a limitation to digital steganography in that "lossy" data compression schemes, which throw away components of the audio or image data to permit a more compact file, cannot be used since they are almost certain to throw away the hidden data. Lossy audio compression schemes such as MP3 or WMA, and lossy image compression schemes such as JPEG, are not appropriate for steganography; non-lossy WAV audio files or PNG image files are much better suited to the job.

It is suspected that criminals and international terrorist organizations find steganography using image files a particularly useful way to communicate secret information. One of the problems with any internet communications from a terrorist's point of view is that they are generally from a specific sender to a specific recipient, allowing security organizations to trace links in the terrorist organization. This problem can be avoided by including secret information in pornographic images, which are then uploaded to a pornographic website, preferably one with a large amount of traffic. This allows the recipients to hide their pickup of the message in a flood of other traffic to the website.

New digital steganography techniques continue to be developed. One scheme is based on "voice over internet protocol (VoIP)", the scheme in which phone conversations are digitized and then sent as sets of packets over the internet. Since conversations have moments of silence, some of the packets will be empty, with these packets being shorter than packets containing voice data. The empty packets can be used to discreetly transfer data. It is possible to transfer data at rates of up to two kilobits a second by this means, adequate for handling text data, and the trick is difficult to detect.

* One of the more interesting examples of modern cryptotech is the "blockchain", which is a scheme for establishing a non-repudiable chain of transactions. When a transaction is performed, it is hashed along with the hash of all previous transactions; the transaction and its place in the transaction sequence can no longer be successfully faked. A blockchain is distributed, with everyone involved in the transactions maintaining a copy of the blockchain and receiving regular updates of it. That makes it increasingly cumbersome and inefficient as the number of participants and transactions increases, and so there is a push to come up with a derivative scheme that is more efficient.
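
The chaining idea can be sketched in a few lines of Python: each block records its transaction along with the hash of the previous block, so altering any earlier transaction breaks every hash that follows. This is a toy illustration, not the structure of any particular blockchain:

    # Toy blockchain: each block's hash covers its transaction plus the
    # previous block's hash, so tampering with any earlier transaction
    # invalidates everything that follows.
    import hashlib

    def block_hash(transaction: str, prev_hash: str) -> str:
        return hashlib.sha256((prev_hash + transaction).encode()).hexdigest()

    transactions = ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dave 1"]

    chain = []
    prev = "0" * 64                          # genesis value
    for tx in transactions:
        h = block_hash(tx, prev)
        chain.append({"tx": tx, "prev": prev, "hash": h})
        prev = h

    def verify(chain) -> bool:
        """Recompute every hash and check the links."""
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev or block["hash"] != block_hash(block["tx"], prev):
                return False
            prev = block["hash"]
        return True

    print(verify(chain))                     # True
    chain[0]["tx"] = "Alice pays Bob 5000"   # tamper with an old transaction
    print(verify(chain))                     # False: the chain no longer checks out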

Blockchain is nonetheless the basis for a "digital currency" named "bitcoin". The bitcoin system is based on a decentralized network and has no central issuing authority. Servers in the network generate ("mine" in the somewhat quirky terminology) units of bitcoin by an algorithm that slows the release over time, to top out at a hard limit of 21 million bitcoins, with the last fraction not due to be mined until around 2140. The bitcoin currency unit is of course the "bitcoin", though the system permits transactions in fractions of bitcoins. A blockchain keeps track of the issue of bitcoins and transactions in bitcoins.

Bitcoin is not pegged to any government currency. Not surprisingly, it has proven controversial, mining being a staggering energy hog -- and more significantly, bitcoin acquiring a reputation as a monetary fantasy, a scam. The whimsical valuation of bitcoin made it ideal for speculation, traders working on the "greater fool" theory, that it's worth buying at a high price, in hopes that a greater fool will buy it at an even higher price. American billionaire Warren Buffett famously called bitcoin "rat poison".

Bitcoin has actually seen some use for payments on the "dark web", the component of the internet engaged in illegal transactions. The dark web also uses "The Onion Router (TOR)", a network routing system with some similarities to a VPN. Like a VPN, TOR is a network based on encrypted communications; it extends the idea by wrapping each message in multiple layers of encryption and bouncing it through a chain of relay nodes, with each relay knowing only the previous and next hop, in principle ensuring that communications are untraceable. However, they're only untraceable in principle, schemes having been devised to crack TOR and to trace bitcoin transactions. The fact that bitcoin is based on blockchains makes it particularly troublesome to the Black Hats -- since if the authorities get their hands on the blockchain, they have a full and verified record of all bitcoin transactions.

Other blockchain-based digital currencies have been devised, but none have been as popular as bitcoin. Following bitcoin, a fad started to promote "non-fungible tokens (NFT)", in which some arbitrary item -- a cartoon of a "bored ape" being the best-known -- was validated with a blockchain. In effect, NFTs were the same as bitcoin, useful only for speculation, differing only in not requiring the troublesome mining. There was some interest in using blockchains for more practical purposes, such as validating deeds, but in practice blockchains have turned out to be too cumbersome, and not really more secure than less troublesome schemes.

* Modern military forces have of course also adopted new cryptological technologies. Combat forces are very dependent on radio communications, which face three problems: jamming, eavesdropping, and spoofing by adversary forces. In the 1980s, the US military developed a digital "Single Channel Ground & Airborne Radio System (SINCGARS)" for tactical communications that used "frequency hopping spread-spectrum (FHSS)" technology, in which a transmission "hops" from one frequency to another in a band according to a "hopping pattern". Trying to jam the radio over the entire band was problematic, and the signal could only be heard by a receiver following the same hopping pattern.

A SINCGARS radio could operate in a single-channel mode, allowing it to communicate with radios lacking frequency hopping. In frequency-hopping mode, it could operate on any of 2,320 channels between 30 and 88 megahertz (MHz) with a channel separation of 25 kilohertz (kHz), hopping between frequencies about 111 times a second. The radio also used data encryption, such as AES-256, to provide data security, along with error-correction schemes to ensure reliable delivery of digital data.
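
The hopping idea can be sketched as a shared pseudorandom schedule: both radios derive the channel for each time slot from a shared key, so a receiver with the key can follow the hops while an eavesdropper sees only scattered bursts. The channel plan below follows the figures given above, while the keyed-hash construction is an illustrative stand-in for the real (and closely held) hopping algorithm:

    # Sketch of frequency hopping: transmitter and receiver share a key and
    # a clock, and derive the channel for each time slot from a keyed hash.
    import hmac
    import hashlib

    CHANNELS = 2320          # 30 to 88 MHz in 25-kHz steps
    BASE_MHZ = 30.0
    STEP_MHZ = 0.025
    HOPS_PER_SECOND = 111

    def channel_for_slot(key: bytes, slot: int) -> int:
        digest = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") % CHANNELS

    def frequency_mhz(channel: int) -> float:
        return BASE_MHZ + channel * STEP_MHZ

    key = b"shared hopping key loaded into both radios"
    for slot in range(5):                    # first five hops of a transmission
        ch = channel_for_slot(key, slot)
        print(f"slot {slot}: channel {ch:4d} -> {frequency_mhz(ch):.3f} MHz")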

Exactly how keys were distributed around a radio network and how the security of the keys was maintained is not clear. Although SINCGARS could communicate with tactical aircraft, the Air Force introduced its own frequency-hopping radio, the HAVE QUICK system, in the 1980s. "Software-defined radios" have since emerged that can operate as a wide range of military radios, so interoperability is less of a problem. SINCGARS is now being updated to a new, more sophisticated generation of technology under the US Army "Combat Net Radio" program, while HAVE QUICK is being replaced by the "Second Generation Anti-jam Tactical UHF Radio for NATO (SATURN)", which is defined as a NATO standard.
