Thread: Scaling Intelligence in an AI-dominated Future

  1. #1

    Scaling Intelligence in an AI-dominated Future

    An article from the Buddhist Door website:

    Scaling Intelligence in an AI-dominated Future

    By Shaelyn McHugh

    β€œIt seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. . . . They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” – Alan Turing.

    How seriously should we be taking artificial intelligence (AI) as a threat to humankind? The image that comes to mind when we think of AI tends to come from movies or media in which robots gain consciousness, turn evil, and exact revenge on their human creators. It is hard to form an objective view of what robots are today when, for most of us, they exist only in science fiction.

    Artificial intelligence is the simulation of human intelligence by computer systems, performing tasks that normally require human capabilities, including speech recognition, decision-making, and language translation. AI is already among us: it allows us to send an email, to use Internet search engines, or to make purchases with a credit card. If AI systems stopped working tomorrow, many societies would fall apart.

    Computing power already supplements and provides shortcuts for our natural lives. From voice messages in place of face-to-face communication, to map applications instead of manual navigation, to intimate relationships begun through online dating services, many of us can no longer entirely separate ourselves from technology. We are inextricably linked, blurring the line between human and machine.


    Continues (with diagrams) at the link:

    https://www.buddhistdoor.net/feature...minated-future

    Any comments about the article?

  2. #2
    Forums Member
    Join Date
    Mar 2017
    Location
    UK
    Posts
    385
    An excellent article about a subject we should all be aware of. Not only that, but any thinking about how the mind works is useful to Buddhists: after all, we are concerned with changing how our own minds work, or at least with influencing the outcomes of their workings.

  3. #3
    Technical Administrator woodscooter's Avatar
    Location
    London UK
    Posts
    1,571
    This very good article looks at the future point when machine intelligence draws level with human intelligence, and then goes a step further to exceed it.

    Even now, AI is responsible for some engineering design which beats the best that the human mind can produce. A recent example which I came across is the design of a screen for fitting inside A320 aircraft, creating a partition between cabins. The screen has to meet certain size constraints, needs to be structurally rigid and at the same time be as light as possible.

    The screen designed by AI met all the requirements and turned out lighter than any other attempt. Also, it looked unlike any engineer's concept. The metal supporting structure was closer to nature, looking like it had evolved. You may be able to find a picture here: https://www.autodesk.com/customer-stories/airbus.

    Commentary says "The algorithm for the partition frame was based on the growth patterns of slime mold" and "The algorithm for the structure within the partition frame was based on the grid structures of mammal bone growth". The final panel was 45% lighter than a conventional design.
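
    To make the idea concrete, here's a toy sketch in Python of the kind of generate-and-discard search involved. It's my own illustration, not Autodesk's actual method, and all the numbers and the "rigidity" test are made up:

    ```python
    import random

    def weight(candidate):
        # Toy stand-in for mass: total material across structural members.
        return sum(candidate)

    def is_rigid_enough(candidate, min_total=40.0, min_member=1.0):
        # Toy rigidity check: enough material overall, no member too thin.
        return sum(candidate) >= min_total and min(candidate) >= min_member

    def random_candidate(n_members=20):
        # Member thicknesses chosen within the allowed size envelope.
        return [random.uniform(0.5, 5.0) for _ in range(n_members)]

    best = None
    for _ in range(10_000):
        c = random_candidate()
        if not is_rigid_enough(c):
            continue  # discarded outright, as described above
        if best is None or weight(c) < weight(best):
            best = c

    if best is not None:
        print(f"Lightest feasible design: {weight(best):.1f} units of material")
    ```

    The real systems search structures rather than lists of numbers, but the shape of the process is the same: humans set the goals and constraints, and the machine explores far more candidates than an engineer ever could.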

    The next big question is one of ethics. AI doesn't patiently explain its thinking to us; it just implacably discards intermediate solutions that don't match what it has been programmed to do. Two major ethical problems arise. One is, obviously, how do we identify bias in setting the algorithm? In other words, who decides how to instruct the computer to look for a solution?

    The other problem is what shall we do when the solution seems to fly in the face of our culture? Here's a rather trite example to show what I mean. Install an AI system to run a cemetery with maximum efficiency. The AI decides that burials must be done with the coffins upright, as that packs far more into the available space. At some point, human judgement has to intervene, and say "Stop. That's not good".
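
    To put the cemetery example in concrete terms, here's a toy sketch (my own invention, with made-up figures) of why the constraint has to come from us: an optimiser told only to minimise plot area per burial will pick the upright option every time, unless human judgement restricts its search space:

    ```python
    ORIENTATIONS = {
        # orientation -> plot area used per burial (illustrative figures)
        "horizontal": 1.0,
        "upright": 0.3,  # packs far more into the available space
    }

    def best_orientation(culturally_acceptable=None):
        options = ORIENTATIONS
        if culturally_acceptable is not None:
            # Human judgement intervening: restrict the search space.
            options = {k: v for k, v in ORIENTATIONS.items()
                       if k in culturally_acceptable}
        # The optimiser itself only ever minimises area per burial.
        return min(options, key=options.get)

    print(best_orientation())                                      # -> upright
    print(best_orientation(culturally_acceptable={"horizontal"}))  # -> horizontal
    ```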

  4. #4
    Forums Member KathyLauren's Avatar
    Join Date
    Jul 2018
    Location
    Nova Scotia
    Posts
    54
    Quote Originally Posted by woodscooter View Post
    The other problem is what shall we do when the solution seems to fly in the face of our culture? Here's a rather trite example to show what I mean. Install an AI system to run a cemetery with maximum efficiency. The AI decides that burials must be done with the coffins upright, as that packs far more into the available space. At some point, human judgement has to intervene, and say "Stop. That's not good".
    I see your point in principle, but I am not sure that that is a good example. That would seem to me to be a remarkably good solution. As part of the process of reviewing the results of an AI-based solution, we need to be aware of the biases that we bring to that judgement, and the reasons why we would emotionally prefer a less-than-ideal solution over a good one.

    You are not really asking how we identify biases that have crept into the AI process. That suggests that you are trying to keep biases out. You are really asking how we can ensure that our biases have been incorporated, rather than someone else's.

    Om mani padme hum
    Kathy

  5. #5
    Forums Member
    Join Date
    Mar 2017
    Location
    UK
    Posts
    385
    I think it's interesting that human beings will not design the finished programming of any intelligent, conscious AI. Such systems will probably emerge from learning algorithms, so the finished AI will be the product of a self-developmental process. Those AIs can then develop learning processes that improve on the ones we set up, and so on. Any resulting consciousness may be very different from any possible human design specification.
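
    As a rough illustration of what I mean (nothing like a conscious AI, of course), here's a toy Python sketch of a learning process that adjusts its own learning process: a simple hill-climber that adapts its mutation step size as it goes:

    ```python
    import random

    def fitness(x):
        return -(x - 3.0) ** 2  # toy objective: best value is at x = 3

    x, step = 0.0, 1.0
    for _ in range(200):
        candidate = x + random.gauss(0, step)
        if fitness(candidate) > fitness(x):
            x = candidate
            step *= 1.5   # success: the learner explores more boldly
        else:
            step *= 0.9   # failure: the learner searches more carefully

    print(f"found x = {x:.3f} with final step size {step:.4f}")
    ```

    Even in this trivial case, the final behaviour (the step size it ends up with) was never specified by the programmer; it emerged from the process.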

  6. #6
    Technical Administrator woodscooter's Avatar
    Location
    London UK
    Posts
    1,571
    You make a good point, Kathy, and I'm pleased to have the opportunity to discuss the issue further with you.

    The programmers will have cultural bias, including some unconscious bias, which will undoubtedly become part of the AI software. The teaching/learning process in AI can also introduce bias.

    I have a friend who works for a product testing service. His company wanted to develop a system to distinguish between original products and counterfeit ones. The AI computer was given a large number of photographs of fake products seized by customs and trading authorities, and a number of photographs of the real thing, taken in the company's own warehouse. The system was put into operation, but it turned out that it labelled all goods as counterfeit except ones photographed in a warehouse with rows of cartons stacked on shelves behind.
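
    For anyone curious, here's a toy reconstruction in Python of how that sort of failure happens. It's my own sketch, not the actual system: in the training data the label is perfectly correlated with where the photo was taken, so the model learns the background instead of the product:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    # Feature 0: a weak, noisy cue about the product itself.
    # Feature 1: background (1 = company warehouse shelves, 0 = anywhere else).
    genuine = rng.integers(0, 2, n)
    product_cue = genuine + rng.normal(0, 2.0, n)  # barely informative
    background = genuine.copy()                    # perfectly confounded
    X = np.column_stack([product_cue, background])

    model = LogisticRegression().fit(X, genuine)
    print("learned weights:", model.coef_[0])  # the background weight dominates

    # A genuine product photographed outside the warehouse is now
    # labelled counterfeit, just like in the anecdote.
    print(model.predict([[1.0, 0.0]]))  # -> [0], i.e. 'counterfeit'
    ```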

    Bias can be built into the decision-making process, and much of the time we won't see it until it's too late. Bias can be based on age, gender, colour, and demographic group.

    Ethics will also be built into the algorithms of AI, and the ethical questions are equally tough to consider. We are beginning to have self-driving cars. Some accidents and a few fatalities have already occurred. Who's responsible? The car passenger? The car supplier? The AI programmers?

    Consider the Trolley Problem, a well-known idea in moral philosophy:

    You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the main track. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

    1. Do nothing and allow the trolley to kill the five people on the main track.
    2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

    Which is the more ethical option? Or, more simply: What is the right thing to do?
    If you pull the lever, diverting the trolley, "only" one person will be killed. If you do nothing, five will surely die.

    But if you pull the lever, you are intentionally causing the death of a person and you have an active part in the tragedy.

    Alternatively, by doing nothing, it could be argued that no one is responsible.

    In the case of the self-driving car, how should it be programmed to react when a collision seems inevitable?
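
    Just to show how unavoidable the question is, here's a deliberately crude Python sketch (entirely my own, with placeholder numbers): once a collision can't be avoided, some cost function has to rank the outcomes, and whoever writes that function is answering the trolley problem in code:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        expected_fatalities: float
        occupant_risk: float  # 0..1

    def harm(o: Outcome, occupant_weight: float = 1.0) -> float:
        # The ethical choice hides in occupant_weight: should the car value
        # its own passenger more, less, or the same as bystanders?
        return o.expected_fatalities + occupant_weight * o.occupant_risk

    options = [
        Outcome("stay in lane, hit group ahead", 2.0, 0.1),
        Outcome("swerve into single pedestrian", 1.0, 0.2),
        Outcome("swerve into wall", 0.0, 0.9),
    ]

    print(min(options, key=harm).description)  # -> 'swerve into wall'
    ```

    Change that one weight and the car makes a different choice. Someone has to set it, and that someone is answering an ethical question on behalf of all of us.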

  7. #7
    Forums Member KathyLauren's Avatar
    Join Date
    Jul 2018
    Location
    Nova Scotia
    Posts
    54
    Yes, there are biases. Some will be intentional and some will be accidental, like with the background-matching counterfeit detector. The question is how do we ensure that the accidental biases are ones we can live with, and the intentional ones are ours and not someone else's. Which, of course, raises another moral dilemma: us vs. them, and all the problems that can cause.

    I don't see the relevance of the trolley problem. Yes, it illustrates that there are ethical dilemmas. I think we all know that already. Other than that point, I don't see the relevance to the discussion.

    BTW, the trolley problem is not a difficult problem to address. I will pull the lever to send the trolley onto the track with one person. I am not responsible for the death. The responsible party is the one who set the trolley in motion and who tied the people to the tracks. My only choice, and therefore the only thing I am responsible for, is one death or five. Of those two options, one death is preferable. The best option, no deaths, was not available to me, and the culpable person is the one who denied me that option.

    As with many moral dilemmas, there is no "doing nothing". Choosing not to pull the lever is not "doing nothing". It is an active choice, in this case, to let five people die. Similar to: "If you are neutral in situations of injustice, you have chosen the side of the oppressor."

    I don't have a lot of time for ivory tower sophistry.

    Om mani padme hum
    Kathy

  8. #8
    Technical Administrator woodscooter's Avatar
    Location
    London UK
    Posts
    1,571
    Kathy, your response to the trolley problem is perfectly fair. The observer is involved, and doing nothing is a choice. One death is better than five deaths. Someone else set up the situation and they must be culpable.

    The trolley problem is relevant, in my view. You didn't address my question about the self-driving car, which is why I introduced the rather simplified analogy of the trolley.

    Quote Originally Posted by woodscooter View Post
    Ethics will also be built into the algorithms of AI, and the ethical questions are equally tough to consider. We are beginning to have self-driving cars. Some accidents and a few fatalities have already occurred. Who's responsible? The car passenger? The car supplier? The AI programmers?

    In the case of the self-driving car, how should it be programmed to react when a collision seems inevitable?
    I'm inclined to take the view that someone else invented the self-driving car and programmed it, so 'someone-else' is responsible for any damage or fatality caused. I think that the courts prefer to blame the person who set the car in motion and who is supposed to be able to intervene in emergency situations.

    Ethics should be defined by us, the people. Ethics gradually change over the decades and the centuries, but they should at any moment in time be acceptable to the majority of the people at that time.

    I don't think we've yet had much public debate about what we expect from self-driving cars, and even less thought has been given to the moderation and curation of AI in general.
