We are the danger to ourselves

In 2014, Elon Musk claimed that AI could pose a greater threat than nuclear weapons. In his words, the danger of AI is far greater than the danger of nuclear warheads. Claims like this have led to growing concerns among scientists about the need for AI regulation, but there's an even more unsettling development on the horizon: quantum computers. Experts anticipate these super-powered machines will soon become accessible to everyone.

The question that arises is: what happens when AI and quantum computing join forces? The quantum computer is the wild card. It could be a game changer, one that changes the entire landscape of artificial intelligence. Could the combination of quantum computers and AI really reach such a potent level? There is a competitive race among industry tech leaders to launch the first viable quantum computer, a device that promises to be significantly more powerful than current classical computers. When the Wright brothers were pioneering the concept of flight, their initial attempts weren't immediately faster than a horse. In fact, a horse could have outrun their first powered flights. But what set airplanes apart wasn't merely the quest for speed; it was about introducing an entirely new mode of travel. An airplane is more than just a faster horse. It's a completely different type of machine. It takes advantage of the higher dimension right above us, the sky. This gives us access to a resource that was previously untapped. Scientists suggest that the development of quantum computing is just like this leap from horse to plane.

A quantum computer is not merely a faster classical computer. It's a different type of machine, one that takes advantage of the quirky, counter-intuitive, and largely untapped phenomena of quantum mechanics. This is its higher dimension. Imagine a computer so powerful it can simulate novel materials to sequester carbon from our atmosphere. A machine that can develop affordable fertilizers that save energy and conserve fossil fuels, or one that can tackle problems so complex that traditional supercomputers struggle under their enormity. It sounds like a crazy idea, but a quantum computer can perhaps harness that by doing some calculations over here and other calculations over there, in parallel. At times, it's doing twice as many calculations as a classical computer existing in one world would be able to do. This might seem like a fantasy, like a dream from a sci-fi story, but one company has made this vision a reality. This is IBM's quantum-centric supercomputer.

This remarkable creation is a striking testament to human ingenuity. It stands ready to revolutionize our comprehension of computing power. This incredible 100,000-qubit supercomputer doesn't just represent a paradigm shift; it's a monumental event, a quantum jolt that shakes the industry. That's why there are literally tens of thousands of the world's brightest minds trying to build these machines and understand them. And it seems the scientific community is split into two passionate factions when it comes to this field. The first group is utterly fascinated by the physics involved. Here's a quote from David Deutsch, one of the most respected scientists in this field: "Quantum computation will be the first technology that allows useful tasks to be performed in collaboration between parallel universes." Imagine a world where all the known laws of physics apply, but different choices were made. Those choices range from the movements of tiny microscopic particles to what you chose for lunch, or whether you decided to watch this video or not. Quantum mechanics makes the peculiar prediction that all of these realities are as real as the one we remember.

It's strange because we don't see these alternate realities, but we've reached a point in science where we can construct machines that leverage these other worlds. Then there's the second group, mainly from computer science. They would argue that a quantum computer could solve problems that even the most advanced conventional computers can't. No matter how sophisticated, some problems are simply beyond them, but that's not the case for a quantum machine. Now, you might be thinking all this sounds fascinating, but isn't it just theory and speculation? Isn't it similar to other futuristic ideas we've heard about, things that physics might allow but that haven't been realized yet? Well, quite a number of these machines have already been deployed. They're present in research centers open to the public, following the model that first introduced supercomputers to the world. Who doesn't know Google's Sycamore? This processor is a step forward in quantum computing, achieving so-called quantum supremacy. A team from Caltech has even run their wormhole teleportation protocol on this computer. After Google announced its Sycamore processor, scientists in China delivered another surprise in quantum computing. They've built the world's strongest quantum computer, the Jiuzhang computer.

Chinese scientists have announced their development of the most powerful quantum computer in the world. It works 100 trillion times faster than the fastest supercomputers out there. It took less than a second to finish a task that the fastest classical supercomputer in the world would need nearly five years to solve. The team says the device could be applied to data mining, network analysis, and chemical modeling research. President Xi Jinping has said that research and development in quantum science is an urgent matter of national concern, and the country has invested heavily in this technology, spending billions in recent years. From the outside, quantum computers look like enormous black monoliths, giant metal boxes about 10 feet wide and 12 feet tall. They're cooled internally by a fridge, a refrigerator that chills the chips to nearly absolute zero. That's literally hundreds of times colder than the vastness of interstellar space. This is among the coldest and most extreme conditions that humans have ever managed to create.

These fridges have a component called a pulse tube. It emits a sound about once per second that is eerily similar to a heartbeat. Standing next to one of these machines is truly mesmerizing. And at the heart of this giant box is a tiny chip about the size of your thumbnail. This chip carries all the wonder and magic that makes the machine operate. We won't delve into the mathematical details of how it all works, but we can offer a roundabout way of understanding it. Imagine that parallel universes do exist, so you have two universes that are exactly identical in every aspect, from the vast horizon to the tiniest atomic detail, but with only one difference. That difference lies in the value of a tiny element called a qubit. A qubit is somewhat like a transistor in a classical computer. It has two distinct physical states, which we label as zero and one. In a regular computer, these states are mutually exclusive.

That means the device can either be one or the other, and never anything else. But in a quantum computer, this device can be in a strange situation where those two parallel universes overlap. As you increase the number of these devices, every added qubit doubles the number of accessible parallel universes. So when you have a chip with about 500 of these qubits, you have something like 2 to the 500th power of these parallel universes existing within that chip (a short sketch just after the teleportation walkthrough below makes this scaling concrete). We can think of it as the shadows of these parallel worlds overlapping with ours. If we're smart enough, we can dive into them and bring results back into our world to have an effect. Quantum algorithms take advantage of the entanglement and parallelism of qubits. This gives them a considerable edge over traditional algorithms in specific problem domains. They also have to get information out of the qubits at the end of a calculation, and this presents a unique challenge.

When you measure qubits, they collapse into a single state, which eliminates their superposition and entanglement. So quantum algorithms use advanced techniques to ensure that meaningful results can still be drawn from those measurements, maximizing the benefit of the computation. And in fact, we have a trend in quantum computing similar to Moore's Law: the number of qubits on a chip has doubled every year for the last nine years. If you put it on a chart, you can actually see where certain technologies kicked in, and where people were ahead of the game or behind the eight ball and lost out, simply by looking at this Moore's-Law-style curve. This exponential growth opens up unprecedented capabilities, not just in the realm of processing speed, but also in the exploration of fundamental quantum phenomena. One strange quantum phenomenon truly exemplifies this potential: quantum teleportation. Back in 1999, Isaac Chuang and his team at IBM successfully implemented the quantum teleportation protocol.

It is a technique for transferring quantum information from one place to another remotely using entanglement. But what does that really mean? Can we teleport people like we've seen in Star Trek? In principle, quantum teleportation uses quantum entanglement to ensure that the state of one particle is transferred to another, no matter the distance. So the particle itself doesn't travel, only its state, its information. This idea was first put forward in 1993 by six scientists, one of whom was Charles Bennett. Isaac Chuang put the theory into practice just six years later. But you might ask: why do we even need this special process? Why can't we just copy things like we do on regular computers? Let's demonstrate how it might work. Imagine you're trying to teleport a baseball from California to New York. First, you create two entangled particles and share them between California and New York. This sets up a link for information sharing, but it's not teleportation yet. Next, in California you measure the baseball together with your entangled particle, producing two bits of data, and the baseball's state disappears from California. You send these bits to New York at the speed of light, say via fiber optics. Once there, the bits are used to recreate the baseball on the other entangled particle. And that's it! The baseball's information has moved from California to New York without being duplicated. Even so, scientists still have a lot to figure out. Two big issues are how to keep quantum information safe when we create entanglement, and how to send quantum bits over long distances without losing any data. We do expect to be able to teleport molecules, maybe water, carbon dioxide, maybe even DNA, maybe organic molecules.
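To make the earlier qubit-counting and measurement points a bit more concrete, here is a minimal sketch in Python with NumPy. It is only a classical simulation for illustration, not code for a real quantum chip, and the helper names (n_qubit_state, measure) are invented for this example. It shows that describing n qubits takes 2 to the power of n amplitudes, which is why 500 qubits correspond to a number of "worlds" with roughly 150 digits, and that a measurement collapses all of that into one readout.

```python
# Illustration only: a tiny classical simulation of a few qubits using NumPy.
import numpy as np

rng = np.random.default_rng(0)

def n_qubit_state(n):
    """Random normalized state over n qubits: it needs 2**n complex amplitudes."""
    amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return amps / np.linalg.norm(amps)

# Every extra qubit doubles the number of amplitudes we have to track.
for n in (1, 2, 3):
    print(f"{n} qubits -> {len(n_qubit_state(n))} amplitudes")
print(f"500 qubits would need 2**500 amplitudes, a number with {len(str(2**500))} digits")

def measure(state):
    """Measurement collapses the superposition: one outcome is drawn at random
    (weighted by |amplitude|^2) and the rest of the information is gone."""
    probs = np.abs(state) ** 2
    outcome = int(rng.choice(len(state), p=probs))
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

outcome, collapsed = measure(n_qubit_state(3))
print("measured basis state:", format(outcome, "03b"))
```

Running it also makes clear why nobody simulates 500 qubits this way on a classical machine, which is exactly why real quantum hardware is interesting in the first place.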
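And here is the baseball walkthrough written out as the standard one-qubit teleportation circuit, again as a rough NumPy sketch rather than code for any real quantum device or SDK; the helper constructions (op, cnot) are hand-rolled for this example. Qubit 0 holds the unknown state in "California", qubits 1 and 2 form the shared entangled pair, two classical bits are measured and "sent", and the receiving qubit is corrected so that it ends up holding the original state.

```python
# A minimal NumPy sketch of one-qubit quantum teleportation.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, target, n=3):
    """Lift a single-qubit gate onto qubit `target` of an n-qubit register."""
    mats = [gate if q == target else I for q in range(n)]
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def cnot(control, target, n=3):
    """Controlled-NOT built by permuting the computational basis states."""
    dim = 2 ** n
    m = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        m[j, i] = 1
    return m

# Unknown state to teleport, held by qubit 0 in "California".
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Full register |psi> x |0> x |0>; qubit 2 lives in "New York".
state = np.kron(psi, np.kron([1, 0], [1, 0])).astype(complex)

# Step 1: share a Bell pair between qubits 1 and 2.
state = cnot(1, 2) @ (op(H, 1) @ state)

# Step 2: interact the unknown qubit with the California half of the pair.
state = op(H, 0) @ (cnot(0, 1) @ state)

# Step 3: measure qubits 0 and 1, producing two classical bits.
rng = np.random.default_rng(1)
outcome = int(rng.choice(8, p=np.abs(state) ** 2))
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse: keep only amplitudes consistent with the measured bits.
for i in range(8):
    if ((i >> 2) & 1) != m0 or ((i >> 1) & 1) != m1:
        state[i] = 0
state /= np.linalg.norm(state)

# Step 4: send (m0, m1) classically; New York corrects qubit 2.
if m1:
    state = op(X, 2) @ state
if m0:
    state = op(Z, 2) @ state

# Qubit 2 now holds the original amplitudes.
base = (m0 << 2) | (m1 << 1)
received = np.array([state[base], state[base | 1]])
print("sent     :", psi)
print("received :", received)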

Now, teleporting a human raises all kinds of ethical questions, because the original, first of all, has to be destroyed in the process of quantum teleportation. With quantum entanglement, your message is linked to another one and can affect the other message instantly, no matter how far apart they are. So when someone tries to intercept the transmission, they would disturb the quantum state of the particles, and we would immediately spot it. That's a big step forward for keeping information safe in the world of quantum computing. Before we dive into what artificial intelligence and quantum computers can achieve together, let's first explore a brief history of quantum physics. Quantum computing has deep roots that trace back to the early 20th century. The journey began with the revolutionary work of Max Planck in quantum theory and continued with many critical milestones. Planck introduced the idea of quantized energy, which set the groundwork for quantum theory. His theory suggests that energy isn't continuous but comes in discrete packets, known as quanta. It explained black-body radiation in a way no other theory could. Building on Planck's theory, Albert Einstein presented a bold idea in 1905, now known as the photoelectric effect. He proposed that light has a dual nature, behaving as both particles and waves, and suggested that light's energy is also quantized into discrete packets, which we now call photons. Niels Bohr took quantum theory a step further in 1913 with his model of the hydrogen atom.

He theorized that electrons exist in specific energy levels, or orbits, around an atomic nucleus. According to Bohr, these electrons can shift between levels by either absorbing or emitting energy. In 1925, Wolfgang Pauli introduced the concept of quantum superposition. He proposed that particles could exist in multiple states at once, with their precise properties only determined when observed. Werner Heisenberg expanded on this idea in 1927 with the uncertainty principle. This principle states that you can't simultaneously know certain pairs of properties, such as the position and momentum of a quantum particle. In 1935, a game-changing paper by Albert Einstein, Boris Podolsky, and Nathan Rosen was published. This paper, known as the EPR paper, introduced the concept of entanglement. As we mentioned in the previous chapter, entanglement is a strange but fundamental aspect of quantum mechanics in which two or more particles become intertwined. Fast forward to 1951, when Alan Turing first proposed the idea of quantum computing. Turing envisioned a machine operating on quantum mechanical principles that could outperform classical computers. But for several decades, this concept remained mostly theoretical due to the challenges of practical implementation. Significant progress was made in the 1970s by physicist Richard Feynman. He recognized that quantum systems could simulate physical systems that are tough to model with classical methods.

Feynman proposed a universal quantum simulator, a machine that could accurately replicate any physical system. In 1982, physicist Paul Benioff put forward the concept of a quantum Turing machine, a theoretical model that operates on quantum principles. The 1980s also saw the birth of quantum algorithms. A physicist by the name of David Deutsch proposed that a quantum computer could tackle certain problems faster than traditional computers, and he introduced what's now called the Deutsch algorithm. In 1994, Peter Shor made a significant breakthrough with Shor's algorithm. It showed how quantum computers could efficiently solve integer factorization problems. What made this so groundbreaking was its potential threat to the security of public-key encryption systems, since it could crack classical encryption algorithms. When it comes to experimental progress, various research groups made important strides in building quantum computers and exploring different uses for qubits. One key development was a collaboration between Google, NASA, and D-Wave Systems. Together, they launched the D-Wave 2 quantum computer in 2013, one of the first commercially available quantum computers. What set it apart was its use of a different approach known as adiabatic quantum computing. Using superconducting qubits, it was designed to solve optimization problems. In 2016, IBM grabbed the spotlight by introducing the IBM Quantum Experience.

This was a cloud-based platform where users could experiment with a small quantum computer made up of five superconducting qubits. The goal was to give more people access to quantum computing and to encourage collaboration within the quantum community. With the IBM Quantum Experience, researchers and enthusiasts around the world could study quantum algorithms and conduct experiments from anywhere. Then in 2017, we saw a major milestone with the demonstration of qubits that could correct their own errors. Physicists at Yale University came up with a breakthrough design known as the surface code. This allowed them to create logical qubits with the ability to correct errors, a key step toward the goal of fault-tolerant computing, where information can be kept safe from errors and decoherence. Companies like Google are pushing hard to build practical, market-ready quantum computers. This surge is not merely an effort to ramp up computational speed; it's about paving the way for a future where the boundaries of technology are redefined. Artificial intelligence is already making a place for itself in this quantum-powered future. What's more, AI can now reconstruct images without ever looking at the original picture, using only the results of brain scans from fMRI. Tristan Harris and Aza Raskin recently brought attention to this. They shared a new video called "The AI Dilemma," following their Netflix documentary, "The Social Dilemma."

In the video, they explain that the researchers taught the AI: "I want you to translate from readings of the fMRI, so how blood is moving around in your brain, to the image. Can we reconstruct the image, then?" The AI only looks at the brain, does not get to see the original image, and is asked to reconstruct what the person sees. The research reveals some concerning data. For instance, half of AI researchers think there's at least a 10% chance that humans could go extinct because we might not be able to control AI. The cause for this concern is the latest data showing that AI's capabilities continue to double every few months. What is horrifying is the prediction that, by 2045, AI could reach the point of singularity, meaning it will match the combined capabilities of all human brains. Could our rapid advancement in quantum computing be contributing to this? Some people are concerned about that possibility. The worry is that AI will eventually develop human-like consciousness and, like something out of a movie, turn against us. Even Elon Musk has shared some of these concerns: "I'm really quite close to the cutting edge in AI, and the rate of improvement is exponential. It scares the hell out of me." What Elon is talking about sounds a lot like what happens in the movie Ex Machina. This movie tells the story of a beautiful AI-powered robot that ends up killing its creator. Interestingly, the title Ex Machina is derived from the phrase "deus ex machina," a Latin expression that means "god out of the machine." In the movie title, the word Deus is dropped.

It seems to suggest that, in the world of AI, there's no need for a divine role when humans can create conscious machines. And this brings us to the Turing test, also known as the imitation game. This is a simple game involving three participants, and the aim is to see whether a machine can convincingly mimic a human. The game comes from Turing's paper entitled "Computing Machinery and Intelligence," which laid the groundwork for artificial intelligence. Turing's fundamental question was: "Can machines think?" And by thinking, he didn't mean simply calculating or executing tasks like modern computers do; he meant genuinely thinking the way humans do. This question is, in essence, philosophical rather than technical, because there's one key difference between humans and machines, and that difference is consciousness. Consciousness has sparked endless debates among scientists and philosophers alike. Is it a product of the brain, or does it exist independently? Generally, there are two viewpoints on this: dualism and materialism.

Dualism holds that consciousness is separate from our physical selves, something often referred to as the soul or spirit. Materialism, on the other hand, rejects the existence of a soul or spirit. According to materialism, consciousness is purely a mechanism of the brain. Since our brain operates like a complex machine, the natura
