Natural language is difficult to formalize, yet some interesting reflective properties appear when you construct the grammar around activities. Take this sentence, for example:
tell Albus that Beatrix will assassinate Harry
                \____________________________/
                Beatrix will assassinate Harry
is a hypothetical activity taking place in space and time. We could attach more information, like “why”, “how”, or “when”. Notice how these mutations change the meaning:
tell Albus why Beatrix will assassinate Harry
tell Albus how Beatrix will assassinate Harry
tell Albus when Beatrix will assassinate Harry
                \____________________________/
Or:
tell Albus that Beatrix will assassinate Harry because she hates him (why)
tell Albus that Beatrix will assassinate Harry with a knife (how)
tell Albus that Beatrix will assassinate Harry tomorrow (when)
                \_____________________________________/
What is interesting is that people never confuse the message with which activity to perform, and they even know whether they are able to perform it. They can answer questions like “do you know what the message means?” and “what do you think about the consequences?”.
Imagine what it would be like to interact with computers this way!
In this case the message is a command to deliver another message. The message to be delivered describes a future activity, marked by the word “will”, which gets replaced with “did” once the activity has taken place. For example, if somebody tells you this, and you later hear that somebody has assassinated Harry, then you might believe that Harry was assassinated by Beatrix.
Imagine a game where you trick non-player characters into believing that Beatrix killed Harry by starting a rumour.
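Just to make the game idea concrete, here is a tiny sketch in Rust of how such a rumour mechanic might work. The types and rules are made up by me for illustration; the point is only that a belief is formed by mutating “will” into “did” once the event is observed.

// A minimal sketch (hypothetical types, not from any existing game):
// an NPC stores rumours of the form "X will <act> Y" and, when it later
// hears that somebody did <act> Y, it mutates "will" into "did" and
// believes the rumoured person was responsible.

struct Rumour {
    actor: String,   // e.g. "Beatrix"
    action: String,  // e.g. "assassinate"
    target: String,  // e.g. "Harry"
}

#[derive(Default)]
struct Npc {
    rumours: Vec<Rumour>,
    beliefs: Vec<String>,
}

impl Npc {
    // "tell <npc> that <actor> will <action> <target>"
    fn hear_rumour(&mut self, actor: &str, action: &str, target: &str) {
        self.rumours.push(Rumour {
            actor: actor.into(),
            action: action.into(),
            target: target.into(),
        });
    }

    // "somebody did <action> <target>" — the event has taken place,
    // so "will" turns into "did" and a belief is formed.
    fn hear_event(&mut self, action: &str, target: &str) {
        for r in &self.rumours {
            if r.action == action && r.target == target {
                self.beliefs
                    .push(format!("{} did {} {}", r.actor, r.action, r.target));
            }
        }
    }
}

fn main() {
    let mut albus = Npc::default();
    albus.hear_rumour("Beatrix", "assassinate", "Harry");
    albus.hear_event("assassinate", "Harry");
    // Prints: ["Beatrix did assassinate Harry"]
    println!("{:?}", albus.beliefs);
}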
How does this work? A few days ago I realized something important:
The meaning of sentences is context dependent, based on the assumed outer activity.
In other words, knowing what kind of activity you are participating in determines the semantics of natural language. Seems obvious, right? But, thinking about this, you might start to notice that natural language is built around describing outer activities in order to communicate clearly in a reflective way.
For example, if somebody tells us this message and we think about it, then we think:
X does tell me to tell Albus that Beatrix will assassinate Harry
                                  \____________________________/
                  \____________________________________________/
and you might notice something weird: we can make a judgement about whether X is a trustworthy person or not, as if we had an internal rule controlling our behavior, in which being a trustworthy person somehow matters and is related to what we hear.
We can model the predicted behavior as an internal thought:
if trustworthy person does tell me that I should tell X that Y then I shall tell X that Y
                                        \____________________/      \___________________/
   \_________________________________________________________/
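As a toy illustration (the names and types here are my own, purely hypothetical), the rule can be written as a function from an incoming message to a predicted action:

// A toy sketch (hypothetical names): the internal rule as a function
// from an incoming message to a predicted action.

struct Message {
    from_trustworthy_person: bool, // the judgement about the speaker
    tell_to: String,               // X: who the message should go to
    content: String,               // Y: what should be told
}

// "if trustworthy person does tell me that I should tell X that Y
//  then I shall tell X that Y"
fn predicted_action(msg: &Message) -> Option<String> {
    if msg.from_trustworthy_person {
        Some(format!("tell {} that {}", msg.tell_to, msg.content))
    } else {
        None // no rule fires; the behavior is not predicted
    }
}

fn main() {
    let msg = Message {
        from_trustworthy_person: true,
        tell_to: "Albus".into(),
        content: "Beatrix will assassinate Harry".into(),
    };
    // Prints: Some("tell Albus that Beatrix will assassinate Harry")
    println!("{:?}", predicted_action(&msg));
}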
Btw, I have been thinking a lot about this lately. Sometimes it seems people follow simple rules without realizing it. They might never know that their behavior can be predicted by simple rules, and that perhaps these rules do not even always make sense!
For example, this ought to be a rule only for exceptional cases, but our brains seem to be hardwired to follow it:
if trustworthy person does tell me Something
then I shall believe that "Something" is right
A scientist would jump in the air and shout: “No, you should perform an experiment to determine whether it is wrong, not believe people, even when you believe they are trustworthy, because even well-meaning people are sometimes wrong”. This is the core of scientific philosophy: to filter out errors caused by people believing something is right without noticing that they were simply told it.
Something to think about: could language learning be hardwired to certain activities, like judging who is trustworthy? Or is this kind of judgement-related behavior learned after acquiring language?
A more careful and flexible rule could be:
if P does tell me that I shall tell X that Y then
    if I do not believe that "Y" is wrong then
        I shall tell X that Y
    else if I believe that Y is wrong &
            I believe that "I tell P that \"Y\" is wrong" is safe then
        I can tell P that I believe that "Y" is wrong
    else
        I can distract P &
        I shall move to safety
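One way to read this rule is as a decision procedure over the listener's beliefs. Here is a sketch of that reading in Rust; the types and names are hypothetical, chosen only to mirror the rule above:

// A sketch (hypothetical types): the more careful rule as a decision
// procedure over the listener's beliefs about "Y" and about safety.

#[derive(Debug)]
enum Action {
    TellXThatY,          // relay the message as requested
    TellPThatYIsWrong,   // object to P, because that seems safe
    DistractAndMoveAway, // neither relaying nor objecting seems right
}

struct Beliefs {
    believes_y_is_wrong: bool,        // "I believe that Y is wrong"
    believes_objecting_is_safe: bool, // "I believe that telling P so is safe"
}

// "if P does tell me that I shall tell X that Y then ..."
fn decide(b: &Beliefs) -> Action {
    if !b.believes_y_is_wrong {
        // "if I do not believe that Y is wrong then I shall tell X that Y"
        Action::TellXThatY
    } else if b.believes_objecting_is_safe {
        // "else if I believe that Y is wrong & objecting seems safe ..."
        Action::TellPThatYIsWrong
    } else {
        // "else I can distract P & I shall move to safety"
        Action::DistractAndMoveAway
    }
}

fn main() {
    let b = Beliefs {
        believes_y_is_wrong: true,
        believes_objecting_is_safe: false,
    };
    // Prints: DistractAndMoveAway
    println!("{:?}", decide(&b));
}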
The meaning of “shall” is related to goal-oriented behavior. For example, if you miss the chance to tell a person something, “shall” turns into “should”, which indicates guilt for not doing it. Later, if something bad happens as a result of the missing warning, one could tell the person:
I am sorry Albus that I could not tell you that Beatrix would assassinate Harry
                                                \_____________________________/
                      \_______________________________________________________/
Sometimes what people say are simple mutations of the sentences they hear, according to relatively simple rules that depend on some complex activity taking place in the world. You might see how this gives rise to complex behavior, not because of the complexity of the language, but because people can find themselves in an enormous number of different situations.
In normal programming, there is only one mode of interpreting sentences: if the source code says so, then the computer does it. It is highly unusual for computers to doubt the source code, or to apologize for not doing something they are told to do because turning off the cooling of the antimatter reaction chamber might be very unsafe for the people on board the Enterprise. Yet this could be important in some applications.
One can imagine computer programs that mediate between people, programmed with simple rules. They might not have much knowledge about the world, but they could be aware of which people they are talking to. We could ask such a program whether it knows who it is talking to, and whether it knows how to tell somebody else. The mathematical properties of such a language are very practical. For example, with hard-coded rules of thought about the interaction between human and machine, if the program can not do something, then it knows that it can not, and it can express this as a thought to the user.
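As a rough sketch of what such a mediator might look like (the names and rules here are invented, not an existing system), it only needs to know who it is talking to and whom it can reach, and it can express “I can not” as a thought when delivery is impossible:

use std::collections::HashSet;

// A rough sketch (hypothetical names and rules): a mediator that knows
// who it is talking to, whether it can reach somebody else, and that
// reports "I can not" as a thought when delivery is impossible.
struct Mediator {
    talking_to: String,         // the person currently talking to the program
    reachable: HashSet<String>, // people the program knows how to tell
}

impl Mediator {
    // "do you know who you are talking to?"
    fn who_am_i_talking_to(&self) -> String {
        format!("I am talking to {}", self.talking_to)
    }

    // "tell <person> that <message>"
    fn tell(&self, person: &str, message: &str) -> String {
        if self.reachable.contains(person) {
            format!("I tell {} that {}", person, message)
        } else {
            // If it can not, then it knows that it can not,
            // and it expresses this as a thought to the user.
            format!("I can not tell {} that {}", person, message)
        }
    }
}

fn main() {
    let mut reachable = HashSet::new();
    reachable.insert("Albus".to_string());
    let mediator = Mediator {
        talking_to: "you".to_string(),
        reachable,
    };
    println!("{}", mediator.who_am_i_talking_to());
    println!("{}", mediator.tell("Albus", "Beatrix will assassinate Harry"));
    println!("{}", mediator.tell("Harry", "Beatrix will assassinate Harry"));
}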
Furthermore, imagine a programming language, written in natural language, that instructs the computer with new behavior. It could use simple rules that simulate a wide variety of behaviors. People could study how these computers behave and use them to prove theorems in natural language. They could also use them to test what a computer would believe in a simulated emergency, when safety matters!