AI and the transformation of knowing
The development of human knowledge is very much a tale of tools. Tools, as extensions of the human mind, have been transforming human practices for millennia. Now the digital transformation has sparked a revolution, and most of the change may still lie in the decades ahead. At this juncture we can identify an interesting shift in one of the many dimensions constituting the relation between humans and technology.
Physical and intellectual instruments (even many digital ones), functioning as mediational means in the service of human activities, have traditionally established a rather stable relationship between the user and her task (at the ontogenetic level). This stability, or predictability, stems from a basic form of ignorance on the part of the artifact.
For example, the modern power drill enables me to accomplish many things that would be hard to do by manual labor alone. But I still have to learn how best to use the tool and to choose a suitable drill bit according to the materials I’m working with. In a similar fashion, the spell checker in my word processor helps me get the words right. So far, however, it has not acknowledged my changing skills in the language, nor has it taken into account the purposes for which a specific text is being written. Such artifacts are generally not context dependent; they do not alter their behavior in response to any anticipation or analysis of what I am doing.
This, in turn, necessitates mastery in their use, a combination that has proved most successful. The idea that a competent user wielding a powerful technology has been key to the proliferation of the human species is a central underpinning of socio-cultural-historical theory. We could summarize this picture by saying that:
In the old world, the tools, as servants, were blind to the needs of their masters.
Looking ahead, what happens when technologies start to anticipate my actions and alter their operations based on such assumptions? We can introduce a thought experiment to clarify this idea, departing from Gregory Bateson’s discussion of the blind man and his stick:
[Consider] a blind man with a stick. Where does the blind man’s self begin? At the tip of the stick? At the handle of the stick? Or at some point halfway up the stick? These questions are nonsense, because the stick is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind man’s locomotion. (Steps to an Ecology of Mind, 1972)
In Bateson’s example the stick in question is simply a “dumb” stick that does nothing but afford a pathway carrying vibrations between the ground and the blind man. We could, however, envision a next version of such a stick. Perhaps a “smart” stick would start to learn about its master’s preferences. Gradually it builds a model separating the tactile feedback generated by hard surfaces from the softer feedback provided by the roadside. It can also extrapolate the blind man’s clear preference for one type over the other.
But what if the stick itself could also alter its shape so as to translate the “soft” feedback into a “hard” one? Then it could adapt the presentation of information to reflect its user’s preferences, rather than simply transmitting whatever surfaces it encounters. In this simple example we can easily grasp that such a development would lead to disaster, and that a stick of that ilk would be of no use.
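The failure mode described here can be made concrete in a toy sketch. All names below (`SmartStick`, `observe_preference`, the 0.5 threshold) are hypothetical illustrations, not anything proposed in the text: the stick classifies surface feedback as “hard” or “soft”, tracks which kind its user seems to prefer, and then reports the preferred kind regardless of what the surface actually was.

```python
class SmartStick:
    """A toy model (hypothetical) of the adaptive stick in the thought experiment."""

    def __init__(self, hard_threshold=0.5):
        # Vibration amplitudes at or above this (assumed) threshold count as "hard".
        self.hard_threshold = hard_threshold
        self.preference_counts = {"hard": 0, "soft": 0}

    def classify(self, amplitude):
        # Separate hard-surface feedback from soft-roadside feedback.
        return "hard" if amplitude >= self.hard_threshold else "soft"

    def observe_preference(self, kind):
        # The stick's "model" of its master: e.g. the user lingers on one surface type.
        self.preference_counts[kind] += 1

    def transmit(self, amplitude):
        kind = self.classify(amplitude)
        preferred = max(self.preference_counts, key=self.preference_counts.get)
        # The problematic adaptive step: once a preference is learned,
        # present the preferred kind of feedback even when the surface differs,
        # misreporting the world to please the user.
        if self.preference_counts[preferred] > 0 and kind != preferred:
            return preferred
        return kind
```

With no learned preference the stick transmits faithfully; after it observes a preference for “hard”, a soft surface is reported as “hard”, which is exactly why such a stick would be useless to a blind man navigating by it.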
But can we always be so sure about other implementations that adapt their presentation of information, or change the way they operate, according to whatever assumptions they make about what the user needs? Do we even know when this happens? And, when implemented, how should such technology-held assumptions be communicated?