7 quotes germane to Object Oriented Ontology

Ian Bogost

“Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally—plumbers, DVD players, cotton, bonobos, sandstone, and Harry Potter, for example. In particular, OOO rejects the claims that human experience rests at the center of philosophy, and that things can be understood by how they appear to us. In place of science alone, OOO uses speculation to characterize how objects exist and interact.”

Ludwig Wittgenstein

“What is the meaning of the word ‘five’? No such thing is in question here, only how the word ‘five’ is used.”

C. S. Peirce

“Consider what effects which might conceivably have practical bearings we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.”

Paul Churchland

“Your brain is far too complex and mercurial for its behavior to be predicted in any but the broadest outlines or for any but the shortest distances in the future.”

Alain Badiou

“I think multimedia is a false idea because it’s the power of absolute integration and it’s something like the projection in art of the dream of globalization. It’s a question of the unity of art like the unity of the world but it’s an abstraction too. So, we need to create new art, certainly new forms, but not with the dream of a totalization of all the forms of sensibility.”

Bruno Latour

“…scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.”

Thomas Kuhn

“Contrary to Latour’s oft-repeated claim that politics has never taken technology seriously, totalitarian regimes stand out from traditional forms of authoritarianism precisely by the role assigned to technology as the medium through which citizens are turned into docile subjects, specifically parts of a corporate whole. While attention has usually focused on totalitarian investments in military technology, of more lasting import have been totalitarian initiatives in the more day-to-day technologies associated with communication.”

World Map for data viz

All elements are tagged with nice labels: nations and US states. The US states are a bit messy since I did them by hand, but the boundaries are approximately right.

I’ve wanted this for a while, mostly for doing “where are people” visualizations, but I never could find a nice SVG file that had all of this in it. Now I do. Download it here
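Since everything in the file is labeled, pulling out a single country or state in Processing is basically a getChild() call. Here’s a minimal sketch of what I mean; the file name “worldmap.svg” and the label “Colorado” are stand-ins, so swap in whatever the downloaded file and its element labels actually are:

PShape worldMap;

void setup() {
  size(1000, 600);
  worldMap = loadShape("worldmap.svg");  // hypothetical file name
}

void draw() {
  background(255);
  shape(worldMap, 0, 0);  // draw the whole map at its natural size

  // grab one labeled element by name and highlight it
  PShape state = worldMap.getChild("Colorado");  // label is an assumption
  if (state != null) {
    state.disableStyle();  // ignore the SVG's own fill/stroke
    fill(255, 0, 0);
    shape(state, 0, 0);    // same origin keeps it aligned with the map
  }
}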

A Tool For Thinking versus A Tool For Doing Things

I was just teaching a workshop at CIID in Copenhagen where we introduced designers of various sorts (industrial, graphic, service) to Processing, generative design, and the idea of Natural User Interfaces, i.e. the Kinect. I was happy with the results, with what the students learned and what they produced (here, for instance), but I walked away wondering the same thing I always wonder when I teach non-CS students programming: what’s the difference between thinking about code as a tool for doing things and thinking about code as a tool for thinking? What does that mean for making things, and for talking about and teaching code? Since I write about code, teach workshops, and do documentation, these questions are, as they say, relevant to my interests.

Lots of times we (and by “we” I mean “programmers”) think of code as a tool for doing stuff. We don’t care about the implementation underneath; we blindly reach into boost::asio, we fire up Rails without looking at what ActiveRecord actually does, we just do things. Making a website, a routine phone app, a CRUD application: these are things that I, and I suspect many others, often do on auto-pilot. When I use code to think, I find myself usually looking at weird compiler behavior or hidden, little-known features of languages (example); a quick illustration of what I mean follows the list below. I think about code, about computation, about what it means to think in code. Sometimes I think about algorithms, but rarely. Sometimes I think about the behavior of an algorithm or the pattern in a signal or dataset, but then I’m not thinking in code, I’m looking through code. I’m not really thinking with it; I want it out of the way so I can see what I’m looking at. So there are really a few different levels of involvement here:

  1. Running Code
  2. Using Code
  3. Thinking Code
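To make that third level concrete: Processing sketches are Java underneath, and Java’s boxed-integer cache is exactly the kind of weird, little-known behavior that pulls me into thinking code rather than just running it. A minimal sketch (my illustration, not something from the workshop):

// Java caches boxed Integers from -128 to 127, so == can end up comparing
// the same cached object for small values and two distinct objects for
// larger ones. equals() compares the numeric values themselves.
Integer a = 127;
Integer b = 127;
println(a == b);       // true: both names point at the cached object

Integer c = 128;
Integer d = 128;
println(c == d);       // false: two distinct objects, == compares references
println(c.equals(d));  // true: value comparison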

And those are interesting things to try to explain to students. I didn’t do a good job of it this time, and I regret that, and I’m going to rectify that mistake next time, because, particularly in design and for designers, it’s important to tell people what the tool they’re picking up actually is. When you pick up a library in Processing that has clearly defined behavior, you’re usually doing something like #1: just running code. Maybe you change some colors, some parameters, tweak what things do, and off you go to see how your thing works. To someone raised on reading Hacker’s Delight, that’s complete heresy. To someone raised to think of tools as being nothing more than elements of a task, that’s of course what you do. If you buy a hammer that doesn’t hammer your nails, you go back to the hardware store and get another one, no? And that sort of makes sense, except that it also cripples you. You end up with rote permutations on the low-hanging fruit of algorithms. You treat programming like you treat Excel. Instead of invention, discovery, and experimentation, you get expressions, behaviors, and pre-defined routines that you can mix and match but never alter. So, is that wrong? I’m not sure. I teach people to think of code as tools because it gets them started thinking that code is interesting. They can do things with it. They are, without a doubt, horrible programmers, and they will continue to be horrible programmers hamstrung by their tools, for perhaps years, or in all likelihood, forever. They implement things they barely understand, leverage things they don’t appreciate, and run into mistakes that they can’t comprehend. And, I think, that’s probably ok. It’s not a failure of mental capability on their part or a failure to communicate on the part of their professors: it’s simply not necessary for being a designer working with interaction.

Thinking about things is using a medium to formulate an expression. Grammar, syntax, norms: these are tools for creating novel expressions of the medium itself. Algorithms, data types and containers, device drivers, protocols, the fundamental objects of a mode of interaction (I could go on) are a few things that strike me as being to code what poetry is to written language. A tool for thinking has codified norms and rules but generally a limited interface, because the more complex the interface, the more limited the possibilities for expression. We limit the interfacing mechanics because doing so frees the mind. A tool for making something, on the other hand, also has an intentionally limited interface, but for a different reason: we’re primarily concerned with having blocks of functionality be legible to us. It’s that last word that sums up what I think designers and non-traditional technical people need to have, and what I was trying to teach at CIID: code literacy. Code fluency is a vital and profound tool for interaction design, but it isn’t a necessary one. Code literacy increasingly is a necessary tool, though how exactly that sits at the intersection of “thinking” and just “doing”, and how best to fit tools to both of those scenarios, is something I’ve yet to parse out completely. I’m far from the only one thinking about this. In fact, a great number of people who do interaction design, whether they know it or not, are working on this problem. That’s what makes it a good problem: it’s big, there are a lot of ways to be wrong and right, and a lot of fruit that can be borne from it, both in terms of thinking about how we learn things and thinking about how we think. Which is a long way of saying: I had fun in my workshop, and I’m looking forward to doing it again next year.

Processing SimpleOpenNI to Processing OpenCV

Random technote post:

Doing blob tracking with a Kinect using Processing is surprisingly hard, as I discovered today in the CIID Generative Design workshop I’ve been running. The code below shows the trickery required to buffer RGB images from the Kinect and then do blob tracking on them:

import SimpleOpenNI.*;
import hypermedia.video.*;

SimpleOpenNI kinectLib;
OpenCV opencv;

void setup() {

  size(640, 480);

  kinectLib = new SimpleOpenNI(this);

  opencv = new OpenCV(this);

  // make sure the Kinect's RGB camera is available before going any further
  if (kinectLib.enableRGB() == false) {

    println(" can't open RGB ");
    exit();
    return;

  }

  // give OpenCV a buffer the same size as the Kinect's RGB stream
  opencv.allocate(640, 480);

}


void draw() {

  kinectLib.update();

  background(122);

  // copy the current RGB frame from the Kinect into OpenCV's buffer
  opencv.copy(kinectLib.rgbImage(), 0, 0, 640, 480, 0, 0, 640, 480);
  opencv.threshold(80);

  // difference the thresholded frame against the frame remembered
  // in keyPressed() below
  opencv.absDiff();

  image(opencv.image(), 0, 0, 640, 480);

  // min area 10px, max area half the screen, up to 10 blobs, find holes
  Blob blobs[] = opencv.blobs(10, width*height/2, 10, true, OpenCV.MAX_VERTICES*4);

  fill(255);
  for(int i = 0; i < blobs.length; i++) {
    rect(blobs[i].rectangle.x, blobs[i].rectangle.y, blobs[i].rectangle.width, blobs[i].rectangle.height);
  }
}

Here's the somewhat unintuitive part: because the blobs() method (and the findContours() call underneath it) mangles the image, to save the image in memory we do this:

void keyPressed() {

  // grab a clean frame, threshold it the same way draw() does,
  // and stash it so absDiff() has something to compare against
  opencv.copy(kinectLib.rgbImage(), 0, 0, 640, 480, 0, 0, 640, 480);
  opencv.threshold(80);
  opencv.remember(OpenCV.BUFFER);

}

The OpenCV library has a restore() method that is supposed to handle un-mangling the image buffer, but it doesn't seem to work with buffered images, only with ones captured from a Capture source.
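One workaround that seems plausible, sketched under the assumption that image(OpenCV.MEMORY) hands back the frame stored by remember(): skip restore() entirely and re-copy the remembered image into the buffer yourself after blobs() has mangled it.

// After blobs() has mangled the buffer, pull the remembered frame back
// into it by hand rather than relying on restore().
// Assumes image(OpenCV.MEMORY) returns the frame saved by remember();
// if it doesn't in your version of the library, keep your own PImage copy instead.
Blob blobs[] = opencv.blobs(10, width*height/2, 10, true, OpenCV.MAX_VERTICES*4);

PImage remembered = opencv.image(OpenCV.MEMORY);
opencv.copy(remembered, 0, 0, 640, 480, 0, 0, 640, 480);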