Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity. -- Charles Mingus

Monday, June 29, 2015

Heterogeneous Reasoning and the Openbox

We call reasoning that involves using multiple representations heterogeneous.

It is a little odd that we have to qualify reasoning as heterogeneous, since this is the typical case.  When you are finding your way to a party at my house, you might combine my address (textual information) with a map (diagrammatic information) to decide on a route to take.  When you decide to take an umbrella after looking out of the window and listening to the weather forecast, you are likewise reasoning heterogeneously.  We do it all the time.

Conversely, homogeneous reasoning is pretty rare.  Mathematicians do it all the time, sure, but they aren't typical reasoners.  And of course you're asked to do it in most logic classes, which, as we all know, are excellent preparation for real-life reasoning.

Anyway, one of the things that I am interested in is heterogeneous reasoning, and as part of the Openproof project we have implemented the Openbox, a computational framework for exploring it.

The idea behind the Openbox is that it provides a representation-free proof environment.  It allows you to create proofs: adding steps, citing earlier steps, saving and opening proof files, and so on; but it contains no representations to put in the steps.  These have to be provided later.
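To make this concrete, here is a minimal sketch, in Python, of what a representation-free proof structure might look like.  This is my own illustration, not the Openbox API (the Openbox itself is a Java framework), and names like Representation and ProofStep are invented for the example:

```python
from dataclasses import dataclass, field
from typing import Any, List


class Representation:
    """A pluggable representation.  The generic proof machinery never
    inspects step contents; it delegates all checking to the
    representation that owns them."""

    def check(self, step: "ProofStep", cited: List["ProofStep"]) -> bool:
        """Does this step follow from the steps it cites?"""
        raise NotImplementedError


@dataclass
class ProofStep:
    content: Any                    # supplied by some representation
    representation: Representation  # the plugin that owns the content
    cited: List["ProofStep"] = field(default_factory=list)


@dataclass
class Proof:
    """Generic machinery: steps, citation, verification.  Nothing here
    depends on any particular representation."""

    steps: List[ProofStep] = field(default_factory=list)

    def add_step(self, step: ProofStep) -> None:
        self.steps.append(step)

    def verify(self) -> bool:
        return all(s.representation.check(s, s.cited) for s in self.steps)
```

Because each step carries its own representation, steps of different kinds can coexist in a single proof - which is exactly what heterogeneous reasoning requires.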

Our application Fitch (part of the Language, Proof and Logic courseware package) is built in exactly this way.  When we add the first-order logic representation to the Openbox, we get a proof editor whose steps contain first-order sentences.  Of course, the representation comes with inference rules and tools for editing sentences, and so you obtain a rich application.

We are hoping soon to release Hyperproof, an application which allows you to build heterogeneous proofs involving first-order logic and blocks world diagrams similar to those used in Tarski's World (also part of Language, Proof and Logic, or available as part of the Tarski's World courseware package).  We build Hyperproof by adding both representations to the Openbox, together with a heterogeneous component that allows the two representations to work together.
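In terms of the sketch above, a heterogeneous component is, at bottom, a rule whose conclusion and cited steps may belong to different representations.  Again, this is an illustration of the idea, not the Hyperproof code:

```python
class HeterogeneousRule:
    """A rule bridging two representations.  For example, an "Observe"
    rule might license a first-order sentence when that sentence is
    true in a cited blocks-world diagram."""

    def check(self, conclusion: ProofStep, cited: List[ProofStep]) -> bool:
        # A real implementation would evaluate the sentence in
        # conclusion.content against the diagram(s) among the cited
        # steps.  Here we only fix the shape of the interface.
        raise NotImplementedError
```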

At Diagrams 2014, we presented a program called HyperVenn, a heterogeneous reasoning system combining the blocks world diagrams just mentioned with Venn diagrams.

So in general, the Openbox can be used to construct heterogeneous reasoning systems involving arbitrary, and arbitrarily many, representation systems.

Saturday, June 20, 2015

Single Feature Indicator Systems

Atsushi Shimojima and I are beginning a project to investigate diagrammatic representations from a generic perspective.  By this we mean that, instead of considering the affordances of individual diagrammatic representation systems (for example, making an in-depth study of Venn diagrams), we plan to look at features shared by many representation systems, and to study the affordances that arise from those features.

As a starting point, we looked for the simplest kind of representation system that we could.  We came up with what we call Single Feature Indicator Systems (SFIS).  An SFIS consists of a collection of roles paired with a collection of values, with the constraint that each role has exactly one of the values.
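In code, an SFIS is little more than a total map from roles to values.  Here is a minimal sketch in Python; the class and its names are invented for this post:

```python
class SFIS:
    """A Single Feature Indicator System: a set of roles, a set of
    values, and an assignment giving each role exactly one value."""

    def __init__(self, roles, values, assignment):
        assert set(assignment) == set(roles), "every role gets a value"
        assert all(v in values for v in assignment.values())
        self.roles = set(roles)
        self.values = set(values)
        self.assignment = dict(assignment)

    def value_of(self, role):
        return self.assignment[role]
```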

That's an abstract definition; here's a concrete example.  A (standard) traffic light can be represented by an SFIS having three roles: the red, yellow and green lights.  Each light can be either on or off - these are the values.
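In the sketch, the traffic light looks like this:

```python
# A standard traffic light: three roles, two possible values each.
light = SFIS(
    roles={"red", "yellow", "green"},
    values={"on", "off"},
    assignment={"red": "on", "yellow": "off", "green": "off"},
)
print(light.value_of("red"))  # -> on
```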

When we see a diagram of a traffic light, it tells us something about some actual traffic light.  If the representation shows that the value of the red light role is "on", then we expect the red light in the corresponding actual traffic light to be illuminated.  This is the sense of "indicator" in "Single Feature Indicator System" (we call a collection of roles together with their values a feature).

In the general definition of an SFIS, there are no constraints between the values that different roles may take.  For example, in the SFIS just described, there is no constraint saying that the red and green light roles cannot both have the value "on".  That constraint, if you want it, would be part of an extended representation system.  Of course, if you want to describe potentially defective real traffic lights, then you don't want to enforce this constraint in the representation.

It is possible to build an inference system on top of an SFIS.  It's pretty boring, but there is an inference rule which allows us to detect contradictions.  If I am shown two representations of the same traffic light, and in one the red role has the value "on", while in the other it has the value "off", then this is plainly a contradiction (just as if I tell you both P and not-P about a world).  Such an inference rule is available in every SFIS, and is a consequence of what it means to be an SFIS.
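In the sketch, the contradiction rule amounts to comparing assignments; the function name is invented:

```python
def contradictions(rep_a, rep_b):
    """Roles on which two representations of the same target disagree.
    Any disagreement is a contradiction, since each role has exactly
    one value in the represented situation."""
    assert rep_a.roles == rep_b.roles
    return {r for r in rep_a.roles
            if rep_a.value_of(r) != rep_b.value_of(r)}


# A second report about the same light, with the red light off.
other = SFIS(
    roles={"red", "yellow", "green"},
    values={"on", "off"},
    assignment={"red": "off", "yellow": "off", "green": "off"},
)
print(contradictions(light, other))  # -> {'red'}
```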

Traffic lights, of course, are not the only instances of SFIS.  A work schedule, which shows which people work on which days, is another.  The roles are pairs like (Dave, Wednesday) and the values are perhaps "working" and "not working".  A graph - for example, a tube map - is also an SFIS, with a role for each connection in the map, for example (Victoria, Green Park), and the values "directly connected" or "not directly connected".

Value sets don't have to be binary, nor even finite.  If we want to represent train timetables then the roles can be pairs of stations and trains, with the value representing the time at which the train will be at the station.
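Both of these drop straight into the earlier sketch; the particular names, days and times below are placeholders:

```python
# Work schedule: roles are (person, day) pairs, binary values.
schedule = SFIS(
    roles={("Dave", "Wednesday"), ("Dave", "Thursday")},
    values={"working", "not working"},
    assignment={("Dave", "Wednesday"): "working",
                ("Dave", "Thursday"): "not working"},
)

# Timetable: roles are (station, train) pairs, and the values are
# times - a value set that is neither binary nor, in principle, finite.
timetable = SFIS(
    roles={("Victoria", "Train 101"), ("Clapham Junction", "Train 101")},
    values={"09:05", "09:12"},
    assignment={("Victoria", "Train 101"): "09:05",
                ("Clapham Junction", "Train 101"): "09:12"},
)
```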

The Thrilling Adventures of Lovelace and Babbage

When I teach, I often recommend related books for fun reading.  These typically include Logicomix, by Apostolos Doxiadis and Christos Papadimitriou, and David Foster Wallace's Everything and More.

I am going to be adding The Thrilling Adventures of Lovelace and Babbage by Sydney Padua to my list. This new graphic novel tells the story of Ada Lovelace and Charles Babbage, and the inventions of the Difference and Analytical Engines.

The book primarily focuses on Lovelace, and the first part tells the true story of how Lovelace and Babbage met and collaborated.  There's a cast of colorful characters - all of the leading thinkers of the day, it seems. 

The main panels are black and white and evince a distinctly Victorian feel, even while using all of the tools of a modern graphic novelist.  In addition, there are footnotes, and endnotes to the footnotes, which add an academic feel and some pretty dense context.  Fun fact: Augustus de Morgan was Lovelace's math teacher.

In the second, much longer part of the book, Padua imagines what might have happened had Lovelace and Babbage succeeded in building the Analytical Engine in Victorian England.  This is not a new conceit, of course; Sterling and Gibson's The Difference Engine is the predecessor that most readily came to my mind.  Padua takes Lovelace and Babbage through many adventures in this invented "pocket universe", and enables us to meet Charles Dodgson (Lewis Carroll), and Queen Victoria herself, among other characters.

Finally, in an appendix we are treated to a graphical description of the workings of the Analytical Engine.

It's fun stuff.  Padua is clearly fascinated, if not obsessed, with her subjects.  Her passion comes through on every page, and brings an infectious joyousness to the book.  Don't miss it.

Saturday, June 6, 2015

Some Thoughts on Ex Machina

I recently saw the new movie Ex Machina, by writer/director Alex Garland.  The movie concerns an inventor (Nathan, played by Oscar Isaac) who invites a minion from his company (Caleb, played by Domhnall Gleeson) for a week at his remote estate.  Caleb is to play the role of interrogator in a Turing test of Nathan's most recent invention, Ava.  Nathan wants an independent opinion on whether Ava is an AI.

This post is primarily about philosophy, but if you read on, you should probably expect spoilers.

Ava (played by Alicia Vikander) has a humanoid form - possibly designed by Apple.  But Ava is obviously an android - we get to see the machinery within Ava's transparent neck and torso, and she (sometimes, but not always) makes subtle machine-like sounds when she moves.

Early in the movie Nathan asserts that this version of the Turing test, where you can obviously see that the testee is a robot, is much better (harder to pass, we assume) than Turing's formulation (Computing Machinery and Intelligence, A. M. Turing, Mind, Vol. 59, No. 236 (Oct., 1950), pp. 433-460). This proposition is accepted by the interrogator, but I am not so sure.

Turing explicitly sets up the imitation game so that the participants cannot see one another.  In the original imitation game, which Turing is repurposing, the goal was to tell which of the participants is a man, and which a woman, so this concealment is perhaps necessary.  But Turing retains the setup.  He mentions that this is so that the interrogator is not influenced by "tone of voice", but one assumes that embodiment would be equally concerning.

For me, this is the right decision. We habitually ascribe intelligence to the people that we meet. This is our default response for good reason. By performing the test in the way that the movie assumes, with a walking talking humanoid device, I think that most interrogators would be more, rather than less, inclined to ascribe intelligence to the device.

There are other issues with the philosophy in the movie.  Neither Nathan nor Caleb is quite sure exactly what they are testing for.  At the beginning of the movie, Caleb is quite clear that the Turing test is a test for intelligence.  Later in the movie, both men discuss the test-in-progress, referring at times to consciousness and life.  I'm not sure whether we're supposed to assume at this point that the test for intelligence has been passed, and that they are moving on to further considerations.  I think not.  It seems to me that the writing is just confused.

The social politics of the film are quite troubling.  It is not explained why the mute prototype android, Kyoko (played by Sonoya Mizuno), who serves in the kitchen and bedroom, is cast as an Asian woman.  The switch from Asian features to Caucasian in the development of the AI is never explained, and the role of Kyoko just seems to play into the worst stereotypes.  Oh, and the fact that Ava has those servo sounds while Kyoko does not seems like a regression to me.