Bridging the divide

B: So are you done or you want something else?

G: Well, I am almost full, but maybe could do with another beer

B: Alright..excuse me, can you repeat the beers please?

B: Great..so apart from work, what’s up?

G: Well, mostly same old same old..some action on the environmental stuff

B: What! Still fighting the green battles? I thought you had moved on from those childhood pastimes

G: If you are going to be as dismissive as always, then let’s talk about something else

B: Hmm..no, for once let’s talk about it seriously..I don’t doubt the good intentions of the greenies, but their approach to solving problems is pointless

G: And why is it pointless?

B: Ok, consider the whole peak oil thing in the early 2000’s. There were all these proclamations about how we were running out of oil, how it would go up to $500 per barrel, and how we would have to give up a lot of modern conveniences and go back to the meager lifestyles of our ancestors to survive.
And then what happened? Fracking technology, which had been under development for decades unknown to the masses, was eventually applied successfully, and the world is producing more oil than ever.

G: And will all that fracked oil not run out one day? To me that seems more like kicking the can down the road.

B: You can’t think of extractable reserves of oil in a linear way. They are also a function of the available technology, prevailing price etc. But that’s beside the point. We are now rapidly moving to electric cars and electric transport. The energy is increasingly getting generated through solar and wind technologies. Technology never stands still.

G: That’s moving the goalposts a bit. The peak oil guys were concerned more about the finiteness of oil than about the finiteness of energy itself. A lot of them were pretty excited about renewables, though they were not that optimistic about the timelines involved. Ok, I admit, we have been surprised by how quickly renewables and batteries have come down in cost.
But that’s again beside the fundamental point.
To me this unbridled technological optimism, to the point of recklessness, is like constantly jumping off a high cliff and expecting a whirlwind to arrive in the nick of time and deposit us back on top. Of course we haven’t touched on even larger issues, but we will get to them later.

B: But it has never failed to happen. The ingenuity of the human mind is infinite when it comes to problem solving. Consider this, we have even come up with AI systems that beat us hands down in games like Chess and Go. We are creating things smarter than us. And super intelligent beings will one day be our descendants, beings far more capable than us at finding solutions.

G: We do, however, eventually run into constraints imposed by the laws of physics. Faster-than-light communication, for example, is not possible. Another example: for a long time, optimists felt that Moore’s law would continue indefinitely. Now people are not so sure about that.
Also, thinking only about the long term and ignoring short-term issues is highly irresponsible. There are many examples of people being hurt by the side effects of technological development. The higher incidence and severity of droughts and storms is unlikely to be coincidental, and it has killed many. Can you, with a straight face, tell those who have lost loved ones that technological fixes will turn climate change into a transient problem? No amount of solar panels and electric cars can compensate for their loss. AI-enabled weaponry is another menace brewing over the horizon.

B: While I don’t have a ready answer to that, it’s also irresponsible to dwell on short-term effects and de-emphasize the long-term structural advantages brought about by technology. The industrialization you indirectly blame for climate change..it has also made the world much wealthier and lifted billions, literally billions, out of poverty. AI will bring affordable driverless taxis, making car travel accessible to all sections of society. AI will bring robotic assistants and companions for the elderly, improving their quality of life significantly.

G: I can’t deny that. I guess then we both have some answers but not all. Discussion and compromise seems the only way forward.

B: I agree.

G: Good to see you appreciating the alternate point of view, and not being your usual dismissive self.

B: Haha. Maybe I am feeling accommodating due to the beer. But I am happy you have found a way to feel victorious in the debate!!

G: No need to get so touchy. I am just pleased you agreed with me for once.

B: Alright..what were the larger issues you alluded to earlier?

G: Well, some philosophical ones. Technologists view nature as something to be conquered and controlled. Environmentalists view it as something to coexist with. We have to protect ourselves from its hard side and revel in its good parts, not replace it entirely.
Why cut a tree if a branch will suffice!
And even about the branch
One should think twice..

B: Interesting. Time to go now but we should meet again..


Fusion research is relevant despite renewables

We all know the story. In the 1950’s when energy extraction from fission processes had been industrialized, eyes turned to the potential to do the same with fusion. Hydrogen isotopes are far more prevalent than their uranium counterparts, the potential energy release per unit of mass is larger, and defence departments had already demonstrated the proof of concept in their experiments and trials. ‘Too cheap to meter’ energy seemed to be on the horizon.

It’s 2018, and although large scale R&D continues at organizations like ITER, NIF and some private organizations, and from time to time we hear about tantalizing results, commercial generation still seems a few decades away. In the meantime, renewable generation mechanisms such as solar photovoltaics and wind turbines have drastically reduced in cost (in fact cost reductions are ongoing) and their adoption is in full swing. In several parts of the globe, they are already the cheapest source of electricity, and threaten to dominate everywhere within a few years.

It’s then natural to ask whether continued massive investments in fusion R&D are justified. PV enthusiasts in fact jokingly point out that we already have a natural, ginormous fusion reactor in our celestial neighborhood, one that has been operating reliably at a safe distance for billions of years and powers virtually all life and processes on Earth. Why reinvent the wheel when all you have to do is perfect riding atop the existing one on your way to town? Maybe all that R&D money could be put into making 4-junction solar cells with 46% efficiency as cheap to produce as their monocrystalline silicon counterparts, which currently operate at roughly half of that!!

But once we shift our focus from our earthly abodes, and think of ourselves as a space-faring civilization of the hopefully-not-so-distant future, it becomes apparent why mastering energy extraction from fusion is still necessary.

Consider the Oort cloud, a vast, cold region of our extended solar system occupied by trillions of frozen bodies. From distances greater than 2,000 AU (1 AU is roughly 150 million km), stretching out to 50,000 AU, the Sun appears as just a bright star (still the nearest star), much dimmer than the full Moon. Anyone deciding to stay there for a meaningful amount of time will find it far more worthwhile to extract hydrogen from the frozen water and fuse it than to rely on sunlight.
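
A rough back-of-the-envelope check of that brightness claim, as a minimal sketch: the Sun’s apparent magnitude at 1 AU and the full Moon’s apparent magnitude are standard values, and the inverse-square law does the rest.

```python
import math

M_SUN_1AU = -26.74    # apparent magnitude of the Sun as seen from 1 AU
M_FULL_MOON = -12.7   # apparent magnitude of the full Moon as seen from Earth

def sun_apparent_magnitude(distance_au):
    """Inverse-square dimming: +5 magnitudes per factor of 10 in distance."""
    return M_SUN_1AU + 5 * math.log10(distance_au)

for d in (1, 2000, 50000):
    m = sun_apparent_magnitude(d)
    verdict = "brighter" if m < M_FULL_MOON else "dimmer"
    print(f"At {d:>6} AU the Sun is magnitude {m:6.1f} ({verdict} than the full Moon)")
```

Already at the inner edge of the cloud the Sun is a couple of magnitudes fainter than our full Moon, and at the outer edge it is just an unusually bright star.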

Similarly, a spaceship on a journey to Proxima Centauri or a further star cannot feasibly use the nearest star’s light for long segments on the way. Nor is it practical to have a large load of fuel on board. As concepts like Project Orion envisaged, it may make more sense to hop from icy planet to icy planet reloading ‘fusion fuel’, just like a car drives from one gas station to the next.

Even more speculatively, we could have civilizations on tidally locked planets where the bright side engages in geopolitics and blocks energy flow to the dark side. Or a malevolent entity placing a structure at a suitable distance between a planet and its star, causing a permanent eclipse. Or a volcanic eruption on a planet which blocks out its star’s light for months or years. Fanciful as these scenarios may appear, it never hurts to have options.

Direct applications and use-cases aside, man-made fusion has revealed itself to be a significant scientific and engineering challenge. It’s often interesting to see the new approaches and technical refinements researchers come up with to keep moving forward. And just as the Apollo program and CERN have had a great technological impact outside their core fields, we can expect something similar to happen through fusion research.

So let’s celebrate it and cheer it on. I, for one, eagerly await our fusion powered journeys into the unknown.

Can God ever be understood in its entirety?

Right off the bat, this is not a religious or spiritual post. The aim is rather to have a scientific conception of God, and what that leads to.

How one perceives God and its characteristics is generally a personal matter, dependent on the belief systems/society/milieu/experiences one has grown up with, and on what one expects God’s role to be. But amongst this diversity of thought there are some commonalities, which have led most philosophers to agree on frameworks like the three O’s – omnipotence, omniscience and omnipresence.

From a scientific standpoint, there is one clear candidate which satisfies these three qualities – the Laws of Physics (LoP), or synonymously, the laws of nature. LoP is by definition supposed to govern everything, everywhere; nothing can be excluded from its grasp. Although Quantum Mechanics introduces the role of chance and uncertainty at microscopic levels (and even this remains a matter of interpretation; determinism hasn’t been completely ruled out), it also states that information is always preserved. General Relativity retains a firm control over macroscopic behavior, to the extent it is possible to predict the trajectories of heavenly bodies out to millions of years. LoP can be said to remember the past at all scales, and even know the future at larger scales.

It is also clear that we don’t fully understand LoP yet, although natural philosophers and physicists have made significant progress over the last few millennia. To reconsider the last paragraph, we have two theories, QM and GR, dominant at different scales. Until there is a single all-encompassing theory explaining all phenomena (often referred to as the ToE or Theory of Everything), it is more accurate to say we have good approximations of reality, which bear resemblance to the truth but are not themselves the truth.

There is a distinct possibility that we will never comprehend LoP completely, but only make further and further progress towards that end. For one, to make testable and precise predictions, any physical theory, including a ToE, has to use mathematical structures. These structures can be of various kinds – real, geometric, arithmetic, probabilistic, often all blended together – but at least the arithmetic component will be subject to Gödel’s incompleteness theorems. Hence there would always be some physical phenomena, possibly obscure and subtle, that are observably true but cannot be proven from the existing physical theory. Creativity would be required, and new synthetic a priori judgments established, to extend the set of provable arithmetic statements and, in turn, the set of physical phenomena the theory can account for. By definition, such an extension process would never end.

Secondly, even where the theory gives the right predictions, there would be computational complexity problems if the P vs NP question gets resolved in the expected direction. It may turn out that a quantum mechanical simulation works fine up to a certain size of input, but would take unfathomable amounts of time and energy with real-life parameters. This would not invalidate the theory, but it may make it impractical to answer some pertinent questions about LoP.
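
To make the scaling concrete, here is a minimal illustrative sketch, assuming the textbook brute-force statevector representation (real simulators use smarter tricks, but the exponential wall remains):

```python
# Brute-force (statevector) simulation of n qubits stores 2**n complex
# amplitudes, at 16 bytes each in double precision.
def statevector_bytes(n_qubits: int) -> float:
    return 16 * 2.0 ** n_qubits

for n in (10, 30, 50, 80):
    print(f"{n:>3} qubits -> {statevector_bytes(n):.2e} bytes of memory")

# 30 qubits already need ~17 GB; 80 qubits need ~2e25 bytes, billions of
# times more memory than any existing supercomputer holds.
```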

Thirdly, while debate continues on whether individuals have free will or not, to the extent that LoP enables free-will actions, this introduces a degree of unpredictability into how specific events will play out. While all possible actions will still be governed by LoP, which particular action will be taken in a given situation wouldn’t be known in advance. A technologically advanced free agent in the far future may turn a pulsar into a disco light or use it as a communication device, influencing it and invalidating our past predictions about its trajectory, which assumed it would be left alone. Free will does not make any aspect of LoP inaccessible, and hence does not hinder our pursuit of understanding it; but when applying that understanding, we have to replace the notion of deterministic predictions with conditional ones. This is somewhat similar to what we did earlier to account for QM, but the set of possible events is now much larger, and their probabilities much more difficult to estimate.

All that said, the good thing is that physics and mathematical research, based as they are on the scientific method and strict requirements of falsifiability, have the best chance to take us the farthest and move us closer to God. As a society, we would hence do well to invest more in these disciplines.

Atoms to Electrons to Photons

To get directly to the point, without a cute preface: it seems a civilization’s progress over the long term can be measured by what proportion of its activities is carried out by photons versus atoms. During the Homo erectus age, this ratio was close to zero. At best, we relied on the Sun’s photons to see things around us, and our internal organs (especially the brain) relied on some electrical activity (mostly ionic rather than electronic) to function. As we advanced and made important discoveries, we gradually improved this ratio, but the transformation really kicked into exponential gear once we discovered electricity near the late 18th century (though electrons themselves would be discovered much later), and again when we uncovered the nature of light in the 19th and 20th centuries.

Consider how long-distance communication has evolved: beginning with human (and pigeon) messengers, to the development of a postal system, then the emergence of the telegraph and telephone, to the present state where a worldwide fiber-optic infrastructure and over-the-air electromagnetic signals carry the vast majority of cellular telephony, email and other forms of communication.

Or consider how movie making and distribution have transformed. In the late 19th and early 20th centuries, when commercial movies first began to be screened, scenes were captured on giant rolls of film which then had to be painstakingly developed, cut for editing, and copied through contact printing. By the 1990s, digital cinematography started replacing physical film in the production stage. On the distribution side, we got CDs and DVDs, mass-produced and affordable enough for home theaters and computers. At present, the preferred way to distribute movies is streaming from datacenters to end-user devices, eliminating any physical media in between.

There are many other examples – online shopping, attending MOOC classrooms for learning, even electronic voting. It’s often said nowadays that people, institutions and businesses have to embrace digital to remain relevant in the modern economy. But that’s just a part of the picture. It’s one thing for two end-points, like a doctor and a patient, to be interacting digitally; it’s another thing entirely for the end-points themselves to be metamorphosing.

A great deal of research is ongoing into photonic waveguides and optical transistors in microchips, since electron flow through copper wires makes heat management impractical beyond a point. Energy production is rapidly moving towards solar panels, eliminating the digging of combustible material, steam generation and turbine rotation, all highly physical processes. And come to think of it, the logical conclusion of Virtual Reality is a San Junipero-like setup, where our uploaded minds exist forever, without deterioration or degradation, as energetic processes in reality-simulation machines.

One of my favorite examples is space travel. This is a great time to be following the field, given that the likes of Elon Musk are making a concerted effort to affordably move people to Mars and other planets. The Breakthrough Starshot project is trying to send light-sail spacecraft light-years away to Proxima Centauri, the star system nearest to the Sun. One could imagine that as we miniaturize our devices to smart-dust-like form factors and upload our minds into them, we could embed ourselves in such spacecraft and start our own interstellar journeys. But over the long term, all these approaches are likely to be trumped by our photonic versions.

Suppose a group of miniaturized, intrepid adventurers board a light sail to reach a destination 100 light-years away. Assuming the sail travels at a maximum of 0.2c, and ignoring relativistic and acceleration/deceleration effects, it would take them about 500 years to get there. Now suppose that in 50 years we are able to tightly beam a copy of our electromagnetic minds, together with some ‘wrapper’ beam, towards the same destination. This beam would arrive there in 100 years, and the wrapper part would use the materials available at the destination to bootstrap a basic device to hold the copy; through several bootstrapping iterations, we would have something complex and capable enough to function as a distant clone of ourselves. One may argue that this is too sci-fi, but the only unproven part is how the electromagnetic radiation in the wrapper would get molecules and atoms at the destination to self-assemble into an initial device. Once an initial device is ready in the form of an intelligent makerbot, it can start replicating and generating more complex versions of itself. On the self-assembly front, fundamental research is ongoing on both self-assembling nanostructures and atom-photon interactions, and one can be hopeful about where it will lead!
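
A quick sanity check of those numbers, as a toy calculation using only the assumptions stated above (100 light-years, a 0.2c sail, a 50-year wait before we can transmit the beamed copy, no relativistic or acceleration effects):

```python
DISTANCE_LY = 100       # distance to the destination, in light-years
SAIL_SPEED_C = 0.2      # light-sail cruise speed, as a fraction of c
BEAM_DELAY_YEARS = 50   # years from now until we can transmit the beamed copy

sail_arrival = DISTANCE_LY / SAIL_SPEED_C      # sail travels at 0.2c: 500 years
beam_arrival = BEAM_DELAY_YEARS + DISTANCE_LY  # beam travels at c: 150 years

print(f"Light sail arrives {sail_arrival:.0f} years from now")
print(f"Beamed copy arrives {beam_arrival:.0f} years from now, "
      f"{sail_arrival - beam_arrival:.0f} years earlier despite the late start")
```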

Anyway, regardless of what science will or will not deliver, the general point is that our activities will increasingly be coordinated through photonic mechanisms rather than electronic or atomic ones. We are moving from being mass-like to being energy-like. And I, for one, welcome our future c²-amplified selves.

Shared, Electric, Autonomous

Anyone who has been following the automotive industry must be aware of the massive disruption taking place. If you live in a big metro city, you may be aware of it even otherwise. Except in the luxury and utility segments, the convention of gas-guzzling, single-owner cars is gradually dissolving away. Taking its place is a troika of revolutionary and complementary concepts: one already in full-blown action, one in the early-adopter phase, and the final one in advanced stages of testing and commercialization.

So what are these three concepts?

The first one is Shared Mobility – commuting in a car, point to point, but together with others, like in a train. For short to medium-distance rides, it brings together the best of both worlds – you can travel quite affordably by sharing the cost of the service with co-passengers, without the ‘inconvenience’ and time investment of first getting to a dedicated shared-mobility point, like a train or bus station. Not to mention that, done on a mass scale, it can bring about significant reductions in traffic congestion.

The second one is Electric Cars – driving on electrical energy stored in batteries rather than burning fossil fuels, and replacing pistons and internal combustion engines with motors. Not only is the energy-to-motion conversion efficiency much higher in electric cars, resulting in much better mileage and hence lower running costs, there are also no tailpipe emissions. The battery-plus-motor combination emits nothing except motion!

The final one is Autonomous Cars – and I mean the Level 5 kind, which does not require human supervision at any step of the journey. It’s the most difficult of the three to realize, since it requires a sophisticated Artificial Intelligence system onboard the car (together with a sharp real-time sensory system). Imagine the variety of situations a human driver faces on a daily or periodic basis – fog, rainfall, sandstorms, snow, wet or flooded roads, humans and animals crossing the road, fender benders, and many others too numerous to enumerate. One could say the AI system has to perform like a Vulcan: minus the emotional qualities humans possess, but with supreme pattern-matching, logical and decision-making abilities. The benefit is that, once realized, a well-trained AI will be able to use these abilities much more consistently than humans. People who find it difficult to drive, or are not allowed to – the elderly, small children and teens, the disabled or ill – will experience a new sense of freedom. Even among the rest, many of those who like driving may prefer an autonomous option in stop-and-go city traffic, or when away on a vacation!

Of course a transformation this big is not without controversy, and intense public debates are raging, ranging from the administrative to the philosophical. While we talked about the benefits earlier, there are several societal issues that need to be resolved. Fossil fuel extraction, refining and delivery is a gargantuan industry employing millions of people and hundreds of billions in capital. While the capital may eventually get diverted to renewables like solar and wind, reskilling people is a much messier process, often with mixed results. Similarly, how do we take care of the millions of cab and other drivers who will not find a meaningful role to play among autonomous vehicles? At least in the medium term, while power generation remains dominated by coal and natural gas, aren’t the benefits of zero pollution on the roads mostly offset by the increased pollution from power plants? Even autonomous vehicles will be involved in some accidents and will have to take tough binary decisions in life-and-death situations. What are the right ‘moral’ rules an AI should follow in such moments of crisis? And not to forget, how do we ensure the safety and privacy of passengers in shared cars?

All these are difficult questions with no immediate answers. While some of the problems will go away with time (e.g. the pollution offset, as the share of renewables in the energy mix increases), most answers will come through gradual, consensus-driven policymaking. Legislators at the highest level will have to be involved, weighing the pros and cons at each step. So the debates are healthy and we should let them continue.

What is not debatable is the fact that Shared, Electric, Autonomous (SEA) vehicles will be amazingly affordable to commute in. Let’s use a guesstimate to quantify it.

(a) Estimated selling price of a SEA vehicle in the medium term: $25000 (with most production in developing countries, and with 20% of the cost due to sensors)
(b) Average annualized interest due to debt financing: 4%
(c) Estimated useful life of the vehicle: 10 years
(d) Average driving time per day: 20 hours
(e) Average velocity while driving : 40 km/hr
(f) Days per year with reasonable demand : 300 (weekends may have much lower demand)
(g) Average passenger capacity of the vehicle: 4
(h) Average occupancy of the vehicle: 50%
(i) Average distance commuted by a passenger: 20 km
(j) Average mileage : 5 km/kwh
(k) Average maintenance costs per year: $1000
(l) Average retail cost of electricity : 10 cents/kwh
(m) Average energy efficiency (wall outlet to vehicle battery): 80%
(n) Annualized Profit Expectation over useful life: 20%

(I) Total capital + interest cost : (a)*(1+(b))^(c) = $37,000 (assuming no salvage value)
(II) Total maintenance cost over useful life: (k)*(c) = $10,000
(III) Total distance driven over useful life: (d)*(e)*(f)*(c) = 2,400,000 km
(IV) Total power costs over useful life: (III)/(j)*(l)/(m) = $60,000
(V) Total passengers over useful life: (III)*(g)*(h)/(i) = 240,000

(Not accounting for inflation or time value of money)
(Estimates done in dollars to make it easier to follow for everyone, but they could be converted into your local currency, with additional adjustments to some factors, especially (k) and (l))

Hence a passenger would have to pay [(1+(n))^(c)]*[(I)+(II)+(IV)]/(V), or approximately $2.80 for a 20 km trip, i.e. around 14 cents per km. Contrast this with the going rate of about $1 per km in developed countries, and the SEA vehicle works out roughly seven times cheaper for the passenger. In developing countries, the going rate is around 25 cents per km, but maintenance and power costs are lower too, so there are still decent savings.
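
For anyone who wants to tweak the assumptions, the whole guesstimate fits in a few lines of Python, a direct transcription of inputs (a) to (n) and formulas (I) to (V) above; change any value and rerun.

```python
price = 25_000          # (a) selling price, $
interest = 0.04         # (b) annual interest rate
life_years = 10         # (c) useful life, years
hours_per_day = 20      # (d) driving time per day
speed_kmh = 40          # (e) average speed, km/h
days_per_year = 300     # (f) days with reasonable demand
capacity = 4            # (g) seats
occupancy = 0.5         # (h) average occupancy
trip_km = 20            # (i) average passenger trip, km
km_per_kwh = 5          # (j) mileage
maintenance = 1_000     # (k) maintenance per year, $
power_price = 0.10      # (l) electricity price, $/kWh
charge_eff = 0.8        # (m) wall-outlet-to-battery efficiency
profit = 0.20           # (n) annualized profit expectation

capital = price * (1 + interest) ** life_years                      # (I)   ~$37,000
upkeep = maintenance * life_years                                   # (II)  $10,000
total_km = hours_per_day * speed_kmh * days_per_year * life_years   # (III) 2,400,000 km
power = total_km / km_per_kwh * power_price / charge_eff            # (IV)  ~$60,000
passengers = total_km * capacity * occupancy / trip_km              # (V)   240,000

fare = (1 + profit) ** life_years * (capital + upkeep + power) / passengers
print(f"Fare per {trip_km} km trip: ${fare:.2f} (~{100 * fare / trip_km:.0f} cents/km)")
```

Even fairly pessimistic tweaks, such as halving the daily driving hours or doubling the electricity price, keep the fare well below the developed-country rate quoted above.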

In the long run, having a business case for something is a far more potent factor in its eventual adoption than other concerns. So even if we as a society collectively decide to delay adoption through regulations (despite the benefits, and given the potential issues around employment and stranded assets), it would be wise to recognize that entrepreneurs will keep seeking workarounds to make it happen.

The best course of action would be to plan and let the SEA gradually come in.

Will General AI dream of quantum beats?


The ancient Indian philosopher, Nagarjuna, was often critical of ascribing fixed positions to things. Statements could be both true and false, entities could both exist and not. In a way, impermanence and flux governed reality.

At first, such a framework seems baffling given how we normally perceive the world. The sun rises every day, as it almost surely has for the past 4.5 billion years. We do not expect the car we are travelling in to suddenly disappear, leaving us in the middle of a busy lane. When climbing a hill, we don’t think we are simultaneously descending from it. Yet our everyday macroscopic intuitions don’t count for much at atomic scales and below. Quantum mechanics governs this realm, and it manifests itself in all sorts of ‘weird’ phenomena.

Things at these scales can often exist in a superposition of opposite states, up/down, left/right, particle/wave. They can ‘tunnel’ through barriers and appear on the other side, leaving the barriers intact. They can get ‘entangled’ and influence each other even after drifting apart, by potentially billions of light years, in some kind of shared destiny. If you think the last part is too out there, do read the ER=EPR paper by some of the most renowned physicists.

It’s not surprising that these apparently magical powers eventually aroused the desire to harness them in computing, the latter being a pillar of our modern, tech-enabled society. So, starting in the 1980s, we embarked on a difficult path, encountering and solving numerous challenges, always hoping the destination was around the corner. The activity and excitement of the past few years suggest we are now rapidly approaching it. From quantum annealers like those by D-Wave to the likes of IBM and Intel offering general-purpose quantum computers (some on the cloud, no less), many experts feel it’s only a matter of time before we achieve Quantum Supremacy, an important milestone beyond which classical computing will find it too difficult to catch up.

So what does that have to do with AI, that other great force, one already pervading our society? A bit of history is again in order. In the 1950s, when Minsky and his collaborators came together to simulate intelligence in machines, there was a great sense of optimism that advances would come rapidly and in short order. While there was progress, expectations were much higher, and the field went through a lull dubbed the ‘AI Winter’. Determined researchers continued, discovering the required algorithms and putting the foundations in place. But without the heavy computational power needed, the mainstream remained relatively unaware of what was brewing.

Come the 2000s, computational power started catching up and the accuracy of the then-existing models saturated. Deep learning, the art of training many-layered neural networks, took off, and by now it has been so successful that it has led to new multi-billion-dollar industries like voice assistants, chatbots, surveillance analytics, and more. At the marquee image/speech/text mining competitions held annually, contested by the stalwarts of the field, the question is not whether a deep neural network will win, but which new variant.

That said, it is clear that the CNNs and RNNs of the world are still far from general human cognition. While GANs and WaveNets can be said to demonstrate creativity in the dreamlike images and music they conjure, the signal-to-noise ratio is low, and humans have to filter and curate their output. While AlphaGo did come up with unexpected moves to beat Lee Sedol, it can’t yet think beyond the grid it was trained on and apply its skills to other kinds of board games. Current models are still quite dependent on a pre-defined ‘frame’ and the data they have seen, and can’t function well outside it. Artificial General Intelligence (AGI) remains a distant goal.

AGI remains the holy grail. About a month back, I decided to have an extended conversation with Mitsuku, one of the most advanced chatbots. While it was impressive in that it could give legitimate responses and hold a conversation, its abstract thinking and philosophizing skills will probably take a lot more time to develop. An AGI (maybe embodied in a robot), which can talk, walk and work among us, just like us, is not on the horizon yet. It’s possible that an advanced model architecture which can take in images, audio, text and other sensory information simultaneously (similar to our brains) will be a lot more powerful and human-like than current ones that deal with only a single kind of input. But there are alternative views on how to get there.

Some eminent thinkers like Roger Penrose have proposed that consciousness, probably the most human of all qualities, depends on quantum mechanical processes in the brain, and hence may not be replicable by deterministic models like the ones we mostly use (while training a model we do use randomization, but when making predictions the weights are frozen and the model gives the same output for the same input). A brain with QM activity within could potentially contribute to explanations of many phenomena – why do we seem to have free will? How do we manage to link often unrelated concepts, especially in dreams? Maybe even why meditation calms us down through the modulation of EEG waves!!

There have been several critiques of such proposals, chiefly that the environment in our brains is too warm and chaotic for quantum entanglement to survive for meaningful lengths of time. The timescales at which our brains operate are much longer than the expected decoherence times in structures like microtubules and Posner molecules. While the debate rages on and is quite an interesting one, I think a larger point remains unaddressed.

Recently, computational complexity researchers proved (relative to an oracle) that there are problems in BQP (the class of problems that can be solved efficiently by quantum computers) which are not in PH (a class of problems that can be solved, or at least verified, efficiently by classical computers). Hence it’s likely that quantum computers can do everything classical ones can and more, while the reverse does not hold. When people talk about AGI, the discussion often revolves around reaching the level of human abilities, but AGI need not stop there. In fact, it’s likely that AGI will eventually surpass us and evolve into superhuman beings. Assuming for the moment that classical computing is enough to simulate our brains, a version of superhuman AGI that uses quantum computing, or a mix of quantum and classical, is bound to be much more powerful than one that relies only on the latter.

Superhuman AGI will have a pretty strong incentive to figure out how to go quantum if it isn’t already, and will keep striving towards it till it manages to do so. And when it has free time, it is bound to enjoy the endless possibilities of quantum music far more than those of its classical cousins.