It could be that without a pre-mortem, nobody seriously considered what could go awry. The pre-mortem could save you from a big problem. Generally, though, the notion is that you want the project to succeed, and the pre-mortem helps assure that it will do so. To recap, you do a pre-mortem by first trying to identify potentially adverse outcomes, and then working backward through the steps of the project to ascertain how such an outcome could occur.
Once you find where it could occur, you reconfigure the project so that it either avoids that bad outcome or at least mitigates its chances of occurring. In terms of identifying potentially adverse outcomes, some critics of the pre-mortem say that you could spend forever coming up with a zillion bad outcomes. Suppose I am going to invent a new toothbrush, and someone postulates that the bristles might fall out during use. This seems like a reasonable kind of bad outcome, meaning that it is something that we all could reasonably agree could go wrong. Suppose instead someone says that the new toothbrush could have as an outcome that it causes cancer.
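The walk-back just described can be sketched in code. Here is a minimal, hypothetical sketch: the project steps, risks, and outcome mappings are all invented for illustration, not drawn from any real project plan. Given a postulated adverse outcome, it walks backward through the project steps and flags any step whose known risks could produce that outcome.

```python
# Hypothetical pre-mortem walk-back. The steps, risks, and
# outcome-to-cause mappings below are invented placeholders.
PROJECT_STEPS = [
    ("design", {"spec ambiguity"}),
    ("implementation", {"coding defect"}),
    ("testing", {"missed test case"}),
    ("deployment", {"bad configuration"}),
]

# Which underlying risks could plausibly produce each adverse outcome.
OUTCOME_CAUSES = {
    "data loss": {"coding defect", "bad configuration"},
    "security breach": {"spec ambiguity", "coding defect", "missed test case"},
}

def pre_mortem(outcome):
    """Walk backward through the project steps and return those
    whose risks could lead to the given adverse outcome."""
    causes = OUTCOME_CAUSES.get(outcome, set())
    return [step for step, risks in reversed(PROJECT_STEPS)
            if risks & causes]

print(pre_mortem("data loss"))  # -> ['deployment', 'implementation']
```

Each flagged step is a place where you would then reconfigure the project to avoid or mitigate the outcome.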
I dare say that this seems rather farfetched. Even if you can come up with something oddball to cover it (say, that the toothbrush is made of carcinogenic materials), it is really a bit out there as a reasonable kind of adverse outcome. Therefore, I always try to brainstorm for what seem like reasonably reasoned bad outcomes. We might list a comprehensive bunch of bad outcomes, and then review the list for reasonableness. I had one executive who was irked when we suggested that one bad outcome could be that a new system being developed for a major project could create a security hole in their massive database and allow hackers to get into it.
It was such an emotionally charged outcome that he refused to look at it in any impartial manner. Sometimes a pre-mortem needs a delicate hand to carry it off well. At the Cybernetic Self-Driving Car Institute, we make use of the pre-mortem for our AI development efforts, and we urge auto makers and tech firms that are also making AI software for self-driving cars to do the same. The diagram in Figure 1 shows some important systems development processes.
Like most dev shops these days, we are using agile methods, so this portrayal of the classic method is somewhat of a simplification, but it gets across the overall points that I want to make. Consider the testing phase: this is the part of the systems development process that involves trying to find errors or issues, and then resolving them.
If the testing does not catch an error or issue, the system gets fielded with it hidden inside. For your everyday systems, like an online dating system, an error might not be especially life threatening, though maybe it pairs you with the worst date ever. For AI self-driving cars, since they are life-and-death systems, the testing and the error fixing need to be extremely rigorous.
Some firms are rigorous about this; some are not. Estimates are that self-driving car software consists of millions of lines of code. I assure you that the code is not going to be perfect. It will have imperfections, for sure. I am sure that some of you are howling that even if there is an error or issue, it can be readily fixed by an OTA (Over-The-Air) update to the self-driving car. But, meanwhile, I ask you: if the error or issue has to do with, say, preventing the self-driving car from smacking into a wall, and suppose this actually happens, what then?
Sure, if the auto maker or tech firm later finds the error or issue, it can do an update to all such self-driving cars, assuming that the OTA is working properly and that the self-driving cars are doing their OTA updates. Nonetheless, we still have a dead person or people due to the error or issue, and maybe even more deaths until the error or issue is figured out and fixed. In Figure 1, you can see the process for doing a post-mortem of the system. You start with whatever you know about what actually happened. You then usually will go into the code of the system to try and figure out how it could have led to the adverse outcome.
This might also get you to relook at the design of the system. It could be that the error or issue is some isolated bug, or it could be that the system design itself was flawed and so it is a larger matter than seemingly just changing some code. Based on whatever we can discover about the incident, the next step in the post-mortem involves searching in the AI system to try and figure out what led to the self-driving car willingly going into the wall.
This might involve code inspection.
It might involve examining neural networks being used by the AI. The question arises as to whether whatever we find could have been possibly found sooner. As shown in Figure 1, the pre-mortem might have led to discovering whatever the error or issue is, and had we found it during the pre-mortem it might have been corrected prior to the AI self-driving car being fielded.
The pre-mortem process is quite similar to the post-mortem process. You begin with an adverse outcome. For the post-mortem, you usually are first looking into the guts of the system, and then, depending upon what you find, you take a look at the overall design. For a pre-mortem, we typically look at the design first, trying to find a means by which the design itself could allow for the adverse outcome.
If we find something amiss in the design, then it requires fixing the design and fixing whatever code or system elements are based on the design. Even if we cannot discern any means for the design to produce the adverse outcome, we still need to look at the code and the guts of the system, since it is feasible that the system itself has an error or issue that is otherwise not reflected in the design.
When looking for culprits in either the guts of the AI system or in the design, you would usually do so based on the overarching architecture of the AI system that was developed for the self-driving car.
This usually consists of at least five major system components, namely the sensors, the sensor fusion, the virtual world model, the AI action plan, and the controls activation. The sensors provide data about the world surrounding the self-driving car.
There is software that collects data from the sensors and tries to interpret the data. This is then fed into the sensor fusion component, which takes the various sensory data and tries to figure out how to best combine it, dealing with some data that is bad, some data that conflicts with other data, and so on. The sensor fusion then leads into updating of the virtual world model. The virtual world model provides a point-in-time indication of the overall status of the self-driving car and its surroundings, as based on the inputs from the sensors and the sensor fusion.
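The flow through the five components can be made concrete with a minimal sketch. The function names, data shapes, and threshold below are my own invented placeholders, not any auto maker's actual code; the point is only the hand-off from sensors to fusion to world model to action plan to controls activation.

```python
def read_sensors():
    # Stand-in for camera/radar/LIDAR reads; the values are invented.
    return {"camera": "obstacle_ahead", "radar": 12.0}  # radar range in meters

def sensor_fusion(raw):
    # Combine the sensory sources, reconciling bad or conflicting data.
    return {"obstacle": raw["camera"] == "obstacle_ahead",
            "distance_m": raw["radar"]}

def update_world_model(model, fused):
    # Point-in-time snapshot of the car and its surroundings.
    model.update(fused)
    return model

def plan_action(model):
    # Decide what the self-driving car should do next (toy rule).
    return "brake" if model.get("obstacle") and model["distance_m"] < 20 else "cruise"

def activate_controls(action):
    # Issue the command to the physical driving controls.
    return f"command: {action}"

world = update_world_model({}, sensor_fusion(read_sensors()))
print(activate_controls(plan_action(world)))  # -> command: brake
```

A fault anywhere along this chain, or in how one stage feeds the next, can surface as an adverse outcome at the very end.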
The AI then creates an action plan of what the self-driving car should do next, and sends commands via the controls activation to the car driving controls. This might include commands to brake, to speed up, to turn, and so on. If we were trying to figure out why the self-driving car ran smack into a wall, the first approach would be to try and find a single culprit.
Maybe the sensor fusion had an error and thus misled the rest of the AI. It could be that the virtual world model has an error or issue. Or it could be that the AI action plan contained an error or issue. Or it could be that the controls activation has some kind of error or issue. Sometimes the culprit might indeed be a single culprit. This though is often not the case, and it might be that multiple elements were involved.
The nature of the AI of the self-driving car is that it is a quite complex system. There are numerous portions and lots of interconnections. During normal testing, while in system development, many of the single culprit errors or issues are more likely to be found. The tougher ones, the errors or issues involving multiple elements, those are harder to find. Furthermore, some development teams get worn out testing, or use up whatever testing time or resources they had, and so trying to find really obscure errors or issues is often not in the cards.
Rather than focusing on a single culprit, the next level of analysis would be to look for the double-culprit circumstance. In Figure 3, you can see that there are situations where the error or issue might be found within both the sensors and the sensor fusion. You can have a situation where an error in one component happens to cause an error in a second component to arise. In other words, the second error would not have been found, except for the fact that the first component had an error.
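A brute-force way to express single- versus double-culprit analysis in code: test each component in isolation first, then test every pair, since some errors only surface when two components interact. The failure oracle below is hypothetical; in practice it would be a test harness replaying the incident against the selected components.

```python
from itertools import combinations

COMPONENTS = ["sensors", "sensor_fusion", "world_model",
              "action_plan", "controls_activation"]

def reproduces_failure(subset):
    # Hypothetical oracle: True when exercising these components
    # together reproduces the adverse outcome. In this invented
    # scenario, the bug only appears when the sensors and the
    # sensor fusion interact.
    return {"sensors", "sensor_fusion"} <= set(subset)

def find_culprits():
    # Single-culprit pass first.
    for c in COMPONENTS:
        if reproduces_failure([c]):
            return [c]
    # Then the double-culprit pass over all component pairs.
    for pair in combinations(COMPONENTS, 2):
        if reproduces_failure(pair):
            return list(pair)
    return []

print(find_culprits())  # -> ['sensors', 'sensor_fusion']
```

The pairwise pass grows quickly with the number of components, which is one reason multi-element errors so often escape testing.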
The two errors might not be directly related to each other. They might have been developed completely separately. If you have a developer, Joe, who made an error in the first component, and who is error prone as a developer, and who also worked on the second component, you might well have an error in the second component too. What kind of adverse outcomes should you be considering for an AI self-driving car?
As shown in Figure 4, there are adverse outcomes that are directly caused by the self-driving car. The AI self-driving car might hit a non-car, non-human object, such as a tree, a wall, or a fire hydrant. You would want to postulate this happening, as a predicted adverse outcome, and try to walk back through the AI and the self-driving car system, in order to detect how this could possibly happen.

When the researchers then applied visible light via irradiation, the closed state of the molecule reverted back to an open state.
In other words, the molecule could be open and closed, representing two states, akin to our desire to have an ability to represent 1 and 0, via the dosing of the molecule with, in this instance, variants of light. The mechanistic details of why this molecule performs this way are still not completely understood. It has many aspects or qualities that make it attractive for this purpose, including that it operates well at room temperature, while other molecules that have been tested for this kind of switch have at times required near absolute zero temperatures, therefore requiring cryogenics in order to function properly.
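The open/closed behavior described above can be modeled as a trivial state machine. This is purely an illustrative abstraction of the photoswitching (UV light drives the molecule to the closed state, visible light reopens it), not a chemical simulation; the class and its names are my own invention.

```python
class PhotoSwitch:
    """Toy model of a diarylethene-style photoswitch:
    UV irradiation -> closed state (read as binary 1),
    visible irradiation -> open state (read as binary 0)."""

    def __init__(self):
        self.state = "open"  # molecules start in the open form here

    def irradiate(self, light):
        # Only the two relevant wavebands change the state.
        if light == "uv":
            self.state = "closed"
        elif light == "visible":
            self.state = "open"

    @property
    def bit(self):
        # Read the molecular state out as a binary value.
        return 1 if self.state == "closed" else 0

s = PhotoSwitch()
s.irradiate("uv")
print(s.bit)  # -> 1 (closed)
s.irradiate("visible")
print(s.bit)  # -> 0 (reverted to open)
```

The appeal for molecular electronics is exactly this reversible two-state behavior, provided the molecule keeps switching reliably over many cycles.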
The chromophore diarylethene molecule, though, might degrade over time and not persistently perform. Take a look at Figure 5 for a handy list of some of the characteristics we want to consider for any molecular switch. Does the molecule provide for the kind of electrical conductivity that we want to have? Does it dissipate only a modest amount of heat or a lot of heat (the more heat, the more complicated it becomes to dissipate it without otherwise harming the rest of the switch and surrounding mechanisms)?
Which molecules and which atoms will be involved (for example, are they hard molecules to create involving obscure atoms, or are they more commonly and readily available)? How long is the molecule and how much space is needed for the junction area? Can the molecule be made to switch states such that we can ultimately represent a binary value? Will the molecule remain stable over time or somehow begin to alter or fragment? Does the molecule work at room temperature, or are special temperatures needed, such as via cryogenic means?
AI and Nanotechnology: Molecular Electronics and the Latest in Single-Molecule Switches
Can we readily connect the molecular switch to conventional silicon semiconductors? Can we readily connect the molecular switch to other molecular switches, in addition to connecting to conventional silicon semiconductors?
There are also manufacturing aspects: can the molecular switch be manufactured in a cost-effective manner? These aspects of implementing a molecular switch reveal the complexity involved in trying to discover a molecular electronic approach that will be viable. Finding molecules that meet the multitude of desired properties is tough and often a laborious, trial-and-error research chore. Researchers in this molecular electronics realm are combining electronics expertise with physics, chemistry, materials science, and a slew of other highly technical specialties.
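One way to organize a Figure 5 style checklist is as a simple property record plus a screening function. The field names and the pass/fail criteria here are invented placeholders, not real materials-science thresholds; the candidate's values reflect the text's characterization of a diarylethene-like molecule (attractive overall, but possibly degrading over time).

```python
from dataclasses import dataclass

@dataclass
class SwitchCandidate:
    # Field names and criteria are illustrative placeholders.
    conducts_well: bool
    heat_dissipation: str        # "modest" or "high"
    room_temperature: bool       # works without cryogenics?
    bistable: bool               # can represent both 0 and 1?
    stable_over_time: bool       # resists degrading/fragmenting?
    connects_to_silicon: bool    # interfaces with conventional chips?
    manufacturable: bool         # cost-effective to produce?

def viable(c):
    # A candidate must pass every screening criterion.
    return all([c.conducts_well,
                c.heat_dissipation == "modest",
                c.room_temperature,
                c.bistable,
                c.stable_over_time,
                c.connects_to_silicon,
                c.manufacturable])

diarylethene_like = SwitchCandidate(
    conducts_well=True, heat_dissipation="modest",
    room_temperature=True, bistable=True,
    stable_over_time=False,   # may degrade over time, per the text
    connects_to_silicon=True, manufacturable=True)

print(viable(diarylethene_like))  # -> False (fails on long-term stability)
```

Screening like this makes plain why the search is so laborious: a single failed property sinks an otherwise promising molecule.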
It is a hard problem to solve. Some say that molecular electronics will ultimately be the death of silicon. Atomic-scale architecture is exciting and offers huge promise for sharply reducing the size of computers and for speeding them up. We need this not only for playing games on our mobile devices; more seriously, we need it for the advances taking place in AI that will provide computer-based devices that can respond to us in natural language and in other intelligent-like ways. There is a famous early breakthrough in the nanotechnology realm that occurred back on September 28, 1989. At IBM, researcher Don Eigler was able to use a scanning tunneling microscope to move and control an individual atom, the first person ever to do so.
These sizes are measured in terms of nanometers, a billionth of a meter. Take a look at Figure 1. I have added two arrows at the upper right of the graph. The red arrow shows the line potentially flattening out. Both camps are racing along in their respective top-down versus bottom-up approaches. To give you a better sense of size, take a look at Figure 2.
An ant is far smaller, coming in at about 5 million nanometers in height. The thickness of a sheet of paper is smaller still, at around 100,000 nanometers. Take a look at Figure 3. There are facets to this use of a molecular structure that are worthwhile to further consider. Take a look at Figure 4. To get values into and out of the molecule, we need a means to connect to the molecule.
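The size comparisons above are easy to check with a quick unit conversion (one meter is a billion nanometers), sketched here with the approximate figures used in the text:

```python
# Unit-conversion constants: exact by definition.
NM_PER_MM = 1_000_000  # nanometers per millimeter
NM_PER_UM = 1_000      # nanometers per micrometer

# An ant roughly 5 mm tall, as described in the text.
ant_height_nm = 5 * NM_PER_MM        # -> 5,000,000 nm

# A sheet of paper roughly 100 micrometers (0.1 mm) thick.
paper_thickness_nm = 100 * NM_PER_UM # -> 100,000 nm

print(ant_height_nm)        # 5000000
print(paper_thickness_nm)   # 100000
```

Integer constants keep the arithmetic exact, avoiding floating-point rounding in the conversions.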
Does it provide a predictable electronic behavior over time? Is it a smaller sized molecule or a larger sized one? What are the patterns of molecular bonding involved?