Concept paper

Alternative to the "AI" concept: FCM, First-Hand Computerised Mentality, and Ethical Robots Making

By Aristo Tacoma

Background. The desire, which in part makes sense, to construct computer-driven robots that take over heavy, dangerous and/or boring tasks from human beings has led to a number of programming approaches that involve a mimicking of some features of human pattern recognition, human learning and human task solving. It is however clear from a number of studies in the philosophy of logic--in the wake of such results as Kurt Goedel's famous Second Incompleteness Theorem--and in the philosophy of mind, physics and biology, that there are fundamental properties given to mind that are not given to machines (not even the machines made in the wake of the most advanced form of quantum theory): we will not give a resume of these here, but only point out that for anyone with a deep interest, there is enough material of this nature for many months, at least, of full-time study (including much from this writer). Put simply, the project of enabling robots to have some human-like features cannot be subsumed under such an over-the-limit concept as "Artificial Intelligence", or "AI". Rather, AI is a slogan more proper for advertisement companies than for serious thinkers: it is a hyped, hubris-rich concept with little serious scientific credibility of any sort, unless the notion of 'intelligence' is made small and mediocre--and that is the approach neither of the philosopher nor of the serious scientist. But, then, what is the better concept? We can speak of 'robotics', but that is a somewhat more hardware-oriented word. What is the proper software content of an advanced robot? We propose here--in attunement with the proposals in the Firth platform from 2006, where "first-hand" is also used in connection with programming, and consistent with other later proposals where we also speak of FCM--to call the field of software in advanced robots explorations of FIRST-HAND COMPUTERISED MENTALITY, with the acronym "FCM". A brief explanation follows.
In a foundational G15 PMN app unit {3rd Foundation}, we are exploring how FCM can be initiated using the G15 PMN programming language, with the particularly high-powered PMN Terminal summarising all the advanced functions of the G15 platform.

  The notion of first-hand means that the human minds of the programmer or programmers are engaged in a relationship, first-hand and not behind a 'statistical screen', with the data and algorithms involved in each program. This is necessarily a question of degree: total understanding of every bit of every algorithm at all times is unrealistic, and unnecessary. But to understand every bit in principle, and all key parts in praxis--at least by looking into the code and its comments--is realistic, when the programming language as a whole is keyed to the human mind and is well-defined enough within a meaningful digital context.

  When we speak of FCM, or First-Hand Computerised Mentality, we are investing a first-hand program with features that are entirely algorithmic, digital, and bound to meaningful numbers of well-known sizes (in G15 PMN, the approach is to embrace 32-bit as the paradigm of all good programming because of the vast possibilities contained within this still-meaningfully-sized number range, in contrast to the absurdly long numbers of higher bit widths such as 64, 128, 256 or 512), but that are also inspired by some abstract features of the living mind. These features are implemented in a way that is responsibly limited: the mentality of the programmer or programmers is involved in avoiding any projection of an image of 'intelligence' as such, for 'general intelligence', or just 'intelligence', is a feature involving a living perceptual capacity connected to questions of infiniteness, and not something we can meaningfully and scientifically include in any machine-performable digital program, no matter of what kind. The mentality of the programmers is also involved in that the program is seen as an expression of this mentality--rather as a great literary work is an expression of the mentality of the novelist. The human beings who interact with a robot containing the FCM, or with the FCM steering software units of a vaguely robotic kind on a normal PC, do meet a bit of the mentality of the programmers as encapsulated in context-specific robotic programs. This also includes limits on the use of the FCM, and, for suitably advanced forms of FCM, it includes also ethical priorities in the FCM.
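To keep the number paradigm above first-hand and concrete, a small sketch may help. The following Python fragment is an illustration only, not G15 PMN code, and the function name is our own assumption; it simply checks that a value stays inside the signed 32-bit range that the above paragraph treats as the paradigm of meaningful numbers:

```python
# Signed 32-bit range: the 'still-meaningfully sized' number range
# embraced as the paradigm in the text above.
INT32_MIN = -2**31          # -2147483648
INT32_MAX = 2**31 - 1       #  2147483647

def within_32bit(n: int) -> bool:
    """True if n fits in a signed 32-bit word."""
    return INT32_MIN <= n <= INT32_MAX
```

Staying within such a well-known range keeps every quantity in the program inspectable and meaningful to the programmer, rather than lost in absurdly long bit patterns.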

  An FCM program will stay within the boundaries of a context-specific enough set of algorithms. These algorithms will in general involve some data-set, often organised as a matrix. The FCM doesn't automatically change these data-sets uncontrollably. Rather, an FCM program can be put in a well-controlled 'learning mode'. The words used by the FCM program about itself are moderate--for instance, any psychological word like "learning" should be put in quotes, and, where possible, more technical-sounding words should be used instead. For instance, 'entraining' a matrix is a better word than 'training'; and 'pattern matching' is a better phrase for a program than 'pattern recognition'. Also, the word 'action' sounds like something done by a living being, whereas the word 'task' is more neutral--and so on. This, then, shows awareness in the constructors of the FCM program, and this awareness expresses itself through the FCM program by virtue of modesty. It is part of this modesty that the FCM program is clear about what contexts it is made for. FCM isn't made for all contexts. It is bound to certain contexts, and these must be well-defined enough. Where there are ethical priorities involved--for instance, an FCM robot soldering an electronics circuit in a lab where human beings are also around must have (as the scifi writer Isaac Asimov pointed out) a first priority, or law, of not harming human beings--these priorities cannot be 'unlearned'. The data input to a matrix being entrained must concern a limited area of its functionality. It is part of this necessary modesty of a good FCM program in an advanced robot that such robots are not set up to produce new FCM programs or robots: for that could undo the priorities and make them do things outside of the human-decided proper ethical context.
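The well-controlled 'learning mode' and the priorities that cannot be 'unlearned' can be sketched in outline. The following Python fragment is an illustrative assumption of ours, not G15 PMN code: the priorities are frozen at construction and only readable thereafter, and the matrix can only be changed while an explicit, externally switched entrainment mode is on:

```python
class FCMUnit:
    """Sketch of an FCM-style unit: fixed priorities, gated entrainment."""

    def __init__(self, priorities, matrix):
        # Priorities are frozen: stored as an immutable tuple,
        # with no method anywhere that rewrites them.
        self._priorities = tuple(priorities)
        self.matrix = [row[:] for row in matrix]
        self.entrainment_on = False

    @property
    def priorities(self):
        # Read-only view: the priorities cannot be 'unlearned'.
        return self._priorities

    def set_entrainment(self, on):
        # Only a deliberate, external human decision flips the mode.
        self.entrainment_on = bool(on)

    def entrain(self, row, col, value):
        # Matrix changes are refused outside the entrainment mode.
        if not self.entrainment_on:
            raise PermissionError("entrainment mode is off")
        self.matrix[row][col] = value
```

The design point is that nothing in day-to-day operation can reach the priorities at all, and even the matrix is untouchable until a human switches the mode on.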

  In order for the FCM program to have well-functioning fixed priorities, the starting-point of the setup of the program and its matrices must have features of simplicity and first-hand obviousness, at least to advanced programmers. The part of the FCM that involves data entrained during 'entrainment sessions' must not touch the core priorities built into the FCM. Also, after such sessions are turned off, the data are checked; if they are good, they are put to use--and only a very moderate degree of further data input happens when the FCM program or robot is in actual use. Obviously, some data input is necessary in the day-to-day performance of an FCM robot, and this may well take place by analogy to the entrainment phase. But the entrainment phase goes deeper, and involves human beings giving indications as to how correct or incorrect a task-performance is while the FCM robot, perhaps via some Relatively Free Fluctuation Generators (RFFG), engages in tentative task-patterns. These indications inform the matrix, and are part of how the mentality of the programmer becomes part of the whole setup of the FCM program.
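The entrainment session just described--tentative task-patterns proposed via an RFFG-like source, with human indications informing the matrix--can be sketched as follows. This Python fragment is a hedged illustration; the update rule, the function names, and the use of a seeded pseudo-random source as a stand-in for an RFFG are all our own assumptions:

```python
import random

def entrain_matrix(matrix, patterns, human_score, rounds=100, seed=0):
    """Entrain matrix cells according to human indications.

    patterns:    list of (row, col) task-patterns the unit may try
    human_score: callable giving a correctness indication in [-1, 1]
    """
    rffg = random.Random(seed)  # stand-in for an RFFG source
    for _ in range(rounds):
        row, col = rffg.choice(patterns)   # tentative task-pattern
        score = human_score(row, col)      # human indication
        matrix[row][col] += score          # moderate, direct update
    return matrix
```

Because every update is a direct, inspectable addition steered by a human indication, the matrix remains first-hand readable rather than hidden behind a 'statistical screen'.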

  When the entrainment has taken place, the human beings overseeing the FCM program or robot must check the entrained data-sets in a variety of contexts. The entrainment mode is switched off, and the FCM is then put to practical use--but it must then have additional possibilities of being regulated when it makes mistakes, as it will, since any environment can have unexpected features relative to an entrained, context-dependent set of inputs. It is part of the ethical constraints of robot building that a robot should look like a robot, not like a living being; that it has several easy-to-access buttons to switch off actions that may be inappropriate; that it has numerous software elements, all oriented towards turning itself off whenever there are signals in the environment that things aren't as laid out in the original scheme for it; and that programmers are regularly overseeing it.
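The self-switch-off behaviour--halting whenever environment signals are not as laid out in the original scheme--can be sketched as follows. This Python fragment is an illustration under our own assumptions as to signal names and ranges; nothing here is taken from G15 PMN:

```python
def check_signals(signals, scheme):
    """Return True (keep running) only if every signal is as expected.

    signals: dict of current sensor readings, e.g. {"temperature_c": 25}
    scheme:  dict of (low, high) ranges laid out in advance by humans
    """
    for name, value in signals.items():
        if name not in scheme:
            return False       # unknown signal: switch off
        low, high = scheme[name]
        if not (low <= value <= high):
            return False       # out-of-range signal: switch off
    return True
```

The bias of the logic is deliberately towards switching off: any signal the human-laid scheme did not anticipate counts as a reason to stop, not a reason to improvise.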

  Put simply, FCM is the approach of applying good common sense and the quality of good sound reasoning so as to make the best of computer technology when it is expanded with motors and sensors, without engaging in the slightest hubris as to what it is all about. And let us be clear that while many tasks in dangerous (eg toxic) contexts can with advantage be done by robots, as well as tasks requiring extreme precision (eg the making of tiny semiconductor devices) or extreme repetitiousness (eg the making of some parts of cars), it is part of the honouring of human beings as beyond all machines that we do not overzealously try to apply this all over the place. Thus, for instance, we do not ALWAYS want coffee or tea exclusively served by machines, but also quite often by a nice human being who gets paid for doing that job. Robots are meant to make things easier for humanity--the word 'robot', indeed, derives from a Czech word related to 'slave-work'--and we do not want to see a situation in which the majority of pleasant, valuable, good jobs are handed over to algorithm-driven machines. We programmers must curtail over-enthusiasm as to the application of our algorithms. The human mind, here as in all other contexts, must be put first.

Written in November 2015