The CC3000 and “always-on” sensing

Update 29/4/14

Actually, after testing for a few days, the CC3000 still becomes reliably unresponsive after roughly 72 hours of uptime, even with forcible reboots. While power-cycling it does prolong run-time, as far as I can tell the firmware just isn’t up to being on for significant periods of time, so buyer beware.

End Update

The CC3000 is a nifty little wifi module and the Adafruit shield for it is a great little board: small, easy to work with, fairly power conservative, fairly robust. Adafruit has developed a very friendly library for working with it, which I’ve been using for a space-sensing project at work, and all was going smashingly. Mostly. Usually.


There is a hitch though: the CC3000 can get messed up if it’s on a network that asks its clients to refresh every so often. By “messed up”, I mean not being able to connect to a network, send any data, or even respond. This makes for some frustrating debugging: in my case this was happening every 8 or 9 hours or so, meaning that I’d set up my sensors, work all day, go home, and look at my data feeds to find that the sensors had suddenly stopped sending data. Sometimes it was 8 hours, sometimes 12, sometimes 6. I tried just about everything available: reboot(), disconnect(), stop(), in every permutation of order possible. Finally someone on the Adafruit forums helpfully pointed me to a post on the forum. Here’s the juicy bit:

The root cause of the CC3000 failures that lead to CFOD [error] is buffer starvation and/or allocation issues that can result in a deadlock. The situation is that the CC3000 has buffers to send, but finds it also needs buffers for either ARP packets or TCP protocol packets before it can proceed and transmit the data, but there are none available…In addition, the current behaviour of the CC3000 is to continually update its ARP cache based on packets it receives, regardless of whether those packets form part of ongoing traffic through the CC3000. With this behaviour, if the ARP cache is already full, then a random ARP cache entry is chosen and replaced. In the Core, if that ARP entry ejected is the one for the default gateway, and there are already packet(s) enqueued in the CC3000 ready to be sent, then the CC3000 must ARP to find the MAC address of the default gateway. This is apparently when the CC3000 can find itself in a deadlock, needing buffers to send the ARP request and process the reply, but not having any available. So there are a series of events, each individually contributing to the probability that the CC3000 will fail.

The Address Resolution Protocol (ARP) cache is a map on a client, like the CC3000, that associates a MAC address (an identifier for a device) with an IP address (an identifier for a device on a network). It’s pretty important for a device to have a correct ARP cache, i.e. to know which MAC address answers for which IP on the network, particularly for the default gateway, the router that all of its outbound packets go through. The CC3000 is constantly filling a limited table buffer with any ARP traffic that it overhears. This isn’t good, and I’m not sure how that could be desirable behavior (I’m not a network engineer though, so I’m probably missing something important), because limited space means that at some point the ARP cache is going to fill up and the entry our CC3000 cares most about, the default gateway’s IP to MAC mapping, can get randomly evicted. No bueno. Moreover, getting a fresh ARP record isn’t that hard, but it sounds like the CC3000 waits until it already has packets queued to send before re-ARPing, which is exactly when it can run out of buffers.

The folks behind the Core have a fix coming that addresses this, and at some point in the future the CC3000 firmware released by Texas Instruments will address it too, but for the moment there’s not much we direct users of the CC3000 can do except reboot the CC3000 every so often. Fortunately this is fairly easy: control the power to the CC3000 from a digital pin on the Arduino/ATmega, check to make sure it’s running properly, and if it’s not, switch the pin off, count to 20 (milliseconds, that is), switch it back on, and start up from scratch. [I’m adding this part!] How do you turn the power on and off? Use an NPN transistor to control the power to the CC3000. I’m using a PN2222 with a 2.2k resistor on its base.

[schematic: Untitled Sketch_schem]

The internal firmware of the CC3000 simply can’t deal with the deadlock on its own, even using its wlan_stop() and wlan_start() methods, so we give it a hand by rebooting the RTOS on the chip. My code using the Adafruit CC3000 library looks like this:

// we didn't connect at all? reboot everything
boolean disconnected = cc3000.disconnect();

// reset everything in the CC3000 driver object
cc3000 = Adafruit_CC3000(ADAFRUIT_CC3000_CS, ADAFRUIT_CC3000_IRQ, ADAFRUIT_CC3000_VBAT);

// power-cycle the module through the transistor: off, wait 20ms, back on
digitalWrite(WIFI_POWER_PIN, LOW);
delay(20);
digitalWrite(WIFI_POWER_PIN, HIGH);

// bring the wifi back up
if (!cc3000.begin()) {
  return; // initialization failed, bail and try again on the next pass
}

// Connect to the AP
while (cc3000.getStatus() != STATUS_CONNECTED) {
  if (!cc3000.connectToAP(WLAN_SSID, WLAN_PASS, WLAN_SECURITY)) {
    continue; // keep trying
  }

  // Wait for DHCP to be complete
  while (!cc3000.checkDHCP()) {
    delay(100);
  }
}

That’s really simplistic and there are plenty of situations in which this could royally screw things up for you, but for me, dropping a few readings while we reboot isn’t a big deal at all.

ATTinys, magnets, and bathroom doors

At the Seattle frog office we have 40 male employees and one bathroom stall and you can imagine how that works out, can’t you? Particularly after lunch there is a bit of a logjam at the stall door (worst. pun. ever.). I decided that what we needed was a simple way for people not in the bathroom to know whether someone was in the bathroom. So I got an ATTiny85, RF Tx/Rx pair, and a Hall effect sensor and went to work.


Because I hate having to change batteries all the time, I want to keep this as power-efficient as possible, hence the sleep/wake routine. The Manchester library linked here is my fork; the main repo doesn’t work for me on the ATTiny85 after this commit. YMMV. Other than that, I poll a Hall effect sensor and send a single, very sophisticated byte: one value for open, another for closed.

#include <avr/sleep.h> //Needed for sleep_mode
#include <avr/wdt.h>   //Needed to enable/disable the watchdog timer
#include <manchester.h>

boolean occupied = false;
unsigned char data[1];

#define HALL_PIN 3

void setup() {
  pinMode(HALL_PIN, INPUT);
}

void loop() {
  ADCSRA &= ~(1<<ADEN); //Disable ADC, saves ~230uA
  setup_watchdog(7);    //Setup watchdog to go off after 2sec
  sleep_mode();         //Go to sleep! Wake up 2sec later and check the sensor
  ADCSRA |= (1<<ADEN);  //Enable ADC

  int hall = analogRead(HALL_PIN);
  if (!occupied) {
    if (hall < 200 || hall > 800) { // either side of the magnetic field
      data[0] = 2;
      MANCHESTER.TransmitBytes(1, &data[0]);
      occupied = true;
    }
  } else { // we're occupied
    if (hall > 200 && hall < 800) { // back in the neutral range
      data[0] = 1;
      MANCHESTER.TransmitBytes(1, &data[0]);
      occupied = false;
    }
  }
}

//Sets the watchdog timer to wake us up, but not reset
//0=16ms, 1=32ms, 2=64ms, 3=128ms, 4=250ms, 5=500ms
//6=1sec, 7=2sec, 8=4sec, 9=8sec
void setup_watchdog(int timerPrescaler) {
  if (timerPrescaler > 9) timerPrescaler = 9; //Limit incoming amount to legal settings

  byte bb = timerPrescaler & 7;
  if (timerPrescaler > 7) bb |= (1<<5); //Set the special 5th bit if necessary

  //This order of commands is important and cannot be combined
  MCUSR &= ~(1<<WDRF); //Clear the watchdog reset flag
  WDTCR |= (1<<WDCE) | (1<<WDE); //Set WD_change enable, set WD enable
  WDTCR = bb; //Set new watchdog timeout value
  WDTCR |= _BV(WDIE); //Set the interrupt enable; this keeps the unit from resetting after each interrupt
}

//This runs each time the watchdog wakes us up from sleep
ISR(WDT_vect) {
  //Don't do anything. This is just here so that we wake up.
}

On the Rx side I just have an Uno listening and lighting an LED.

#include <manchester.h>

unsigned char *bufferData;

boolean occupied = false;

void setup() {
  pinMode(3, OUTPUT);
  pinMode(4, OUTPUT);
  bufferData = (unsigned char*) malloc(1);

  // Set digital RX pin
  // Prepare interrupts
  // Begin receiving data
  MANRX_BeginReceiveBytes(1, bufferData);
}

void loop() {
  if (MANRX_ReceiveComplete()) {
    unsigned char receivedSize = 1;
    MANRX_GetMessageBytes(&receivedSize, &bufferData);

    if (bufferData[0] == 1) {
      digitalWrite(3, HIGH);
      digitalWrite(4, LOW);
    } else {
      digitalWrite(4, HIGH);
      digitalWrite(3, LOW);
    }
    MANRX_BeginReceiveBytes(1, bufferData);
  }
}


With the cover off:


Under that is a Hall Effect sensor:


Here’s our notification light out in the main room. Subtle, no?


learning “things” and “learning things”

Innumerable times I’ve found myself infatuated with something, some concept I can’t quite put my finger on but that nags at me incessantly. As these are sometimes complex things, or I’m just not that smart, or a combination of the two, they lead me marching down some blind alley, researching things I barely understand. And then I slowly begin to understand, some twinkling of intuition, and then, slowly, I just get it. I get it. It makes sense, both the intuitive nature of it and the implementation details. And then…nothing. Then something else comes up. The spark dies out. The concept itself that got me into those weeds is gone, lost in a few days or weeks or months of frantically trying to carve out time to understand something. Ostensibly I did this so I could make something, but no, in the end, I just did it for no reason. I get something but I don’t have anything. I didn’t get to put it to use, to understand what it means in the world, how it feels, how it relates to other things, to have a chance to ruminate on it in context. In short, I got nothing. I got abstract knowledge, maybe useful in the future, maybe not. But nothing from it. Increasingly, the most important thing that I took away from my time at CIID was realizing what a remarkable waste of time this is. It’s an old habit I have, one that I’ve only recently begun struggling to break. It’s also something that I see others around me struggling with as well. Technique without praxis is, well, hollow. No application means, well, nothing. It’s a great way to pass time, but so is playing FIFA 14.

Node, V8, and C++

Mixing JS and C++? What? That’s only for Christopher Baker, right? Wrong! It can be for you too. I wanted to make a Node app that would generate a keypress. This is hard because on the computer that’s in front of me, keypress events can only be generated in ObjC or C++. Well that’s cool but it needs to be a server too and I don’t want to write a server in C++ or ObjC. Enter addons for Node! I had no idea what I was doing and 20 minutes later, I’ve got one. Wow. 20 minutes. I know. Take a minute to look at the goodies at the node site. If I can figure it out, you can too.

The short version: there’s a really easy way to use C++ in Node, because Node.js is built on Google’s V8 engine, which means that C++ is its native language. Kinda. Funner yet, making addons for Node is pretty much the same as making a plugin for Chrome or Chromium. Allow the mind to boggle. Anyhow, all you need is node-gyp, which gives Node a way to compile C++ files using a nice python-y makefile that isn’t nearly as cryptic as normal makefiles (disclaimer: normal makefiles are fine. Don’t hate me).

Ok, first:

sudo npm install -g node-gyp

You need node-gyp, and node-gyp will barf if you don’t have Python and gcc. You have both of these if you’re on a Mac or Linux machine.
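node-gyp drives the compile from a binding.gyp file sitting in your project root. The files linked below include one already; it looks something like this (the target and source names here are illustrative, not necessarily what’s in the download):

```python
{
  "targets": [
    {
      "target_name": "keypress",
      "sources": [ "keypress.cc" ]
    }
  ]
}
```

node-gyp configure build reads this, generates the platform-appropriate makefile or project, and compiles the .cc sources into a .node module you can require() from JavaScript.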

Next, get these files:

Next, do:

node-gyp configure build

Next, do:

node server.js

BAM. You just called C++ from Node. You are now a complete hipster. Get your clear plastic framed glasses out and get awkward and fake.

V8 is sorta well documented; it’s thorough, at least. I’m psyched about the idea of doing more with this.

Why Blog?

Usually, I post sensible things here on my blog and I am beginning to suspect that this is the reason that I do not blog that much. All that sense making is exhausting, particularly since I spend so many of my days at frog design making sense and when I live in Seattle, a city decidedly lacking in non-sense. So, in honor of that, I’m going to try to post here more often and to not struggle so much to make sense. I realize that’s a touch antithetical to what people usually do with a blog in general but I’d like to embrace a bit of incongruity, if for no other reason than that there’s a distinct joy in seeing small things both fit together and not fit together at the same time. I’ve simply got too many outlets in which to make no sense: Facebook, Twitter, Tumblr, tiny incoherent emails. It does make the idea of blogging a bit of a struggle. All this is to say: sorry. I’ve just been listening to Football Ramble too much when I should be blogging.

OpenCV + Java in Processing

I love Processing. I love OpenCV too. I do not generally love using them together too much: weird wrappers, spotty support, weird linking errors, etc. That, however, has changed because OpenCV 2.4.4 began supporting Java builds, which means that you can use OpenCV in your Processing sketch sans wrapper, sans 3rd party library. You simply get OpenCV installed, create a little library for it, drop in the main .jar and .dylib that the build process creates, and off you go. In the spirit of making that easy, I made a small Processing library that facilitates going between Java and OpenCV and I’d like to explain how you can use it.

OpenCV + Java

First things first, you need to go to the ocvP5 github repo, download it, and drop it into the libraries folder of your Processing sketchbook (this is probably something like home/Documents/Processing). Once that’s done, you need to get OpenCV installed. Let’s walk through a few ways to do this:

1) OSX + MacPorts OpenCV

OSX people, you do have MacPorts installed, right? If not, remedy that. Once you’re ready with MacPorts, install OpenCV:

sudo port install opencv +java

The +java variant is important to make sure that the Java bindings are built. Now, figure out where the Java bindings ended up:

port contents opencv | grep java

These are the files you need; copy them from the OpenCV install location into the library folder:

cp /opt/local/share/OpenCV/java/* $PROCESSING_HOME/libraries/ocvP5/library

Now you’re ready to go.

2) Build your own OpenCV

This is what you can do if you’re using Linux, Windows, or don’t want to use MacPorts. I’m going to gloss over it a bit because it’s well documented elsewhere on the OpenCV site. First, there’s a checklist of things you need before you build:

CMake 2.8 or higher
Git
JDK 6 or higher
Apache Ant
Python 2.6 or higher

All of these are installable with a package manager on Linux. On Windows, go ahead and download and run installers. Now you’re ready. Get a terminal and run:

git clone git://
cd opencv
git checkout 2.4
mkdir build
cd build

On Linux/OSX you want to generate a Makefile:

cmake -DBUILD_SHARED_LIBS=OFF ..
or on Windows an MS Visual Studio solution:

cmake -DBUILD_SHARED_LIBS=OFF -G "Visual Studio 10" ..

Note: when OpenCV is built as a set of static libraries (the -DBUILD_SHARED_LIBS=OFF option), the Java bindings dynamic library is self-sufficient, i.e. it doesn’t depend on the other OpenCV libs but includes all the OpenCV code inside.
Examine the output of CMake and ensure java is one of the modules “To be built”. If not, you’re likely missing a dependency; troubleshoot by looking through the CMake output for any Java-related tools that aren’t found and installing them. If CMake can’t find Java on your system, set the JAVA_HOME environment variable to the path of your installed JDK before running it. E.g.:

export JAVA_HOME=/usr/lib/jvm/java-6-oracle

Now start the build, on Linux/OSX:

make -j8
or with MSBuild on Windows:

msbuild /m OpenCV.sln /t:Build /p:Configuration=Release /v:m

All this will create a jar containing the Java interface (bin/opencv-244.jar, with the number matching your OpenCV version) and a native dynamic library containing the Java bindings and all the OpenCV code statically linked in (lib/libopencv_java244.dylib on OSX, lib/libopencv_java244.so on Linux, or bin/Release/opencv_java244.dll on Windows). Now you simply need to copy that jar from build/bin and that native library from build/lib into Processing/libraries/ocvP5/library and you’re good to go.

Edit thanks to Kasper Kamperman

On Windows, if you run into some of the issues described here, you can check out the alternate way of installing OpenCV on your Windows machine that he describes in the comments.


This is really a very simple library that I’m beginning to put together. It’s not going to do much more than help you work with OpenCV code directly in Java and take advantage of the wonderful environment and library set that Processing offers. In many ways it’s very similar to the OpenCV block for Cinder or the ofxCv addon for openFrameworks: only what you need to get the two libraries talking to one another and nothing more. Let’s look at an example to see how this works:

Here’s a classic cascade based face detection example:

import ocv.*;

import org.opencv.core.*;
import org.opencv.calib3d.*;
import org.opencv.contrib.*;
import org.opencv.objdetect.*;
import org.opencv.imgproc.*;
import org.opencv.utils.*;
import org.opencv.features2d.*;
import org.opencv.highgui.*;

import processing.video.*;
import java.util.Vector;

PImage pimg;
Capture cam;
ocvP5 ocv;
CascadeClassifier classifier;

ArrayList faceRects;

void setup() {
  // This is what you'll load if you're loading from MacPorts, otherwise this should be
  // wherever you built the OpenCV libraries
  System.load(new File("/opt/local/share/OpenCV/java/libopencv_java245.dylib").getAbsolutePath());

  // make an ocvP5 object to convert file types (this will do a teeny bit more in the future)
  ocv = new ocvP5(this);
  size(640, 480);

  String[] cameras = Capture.list();
  cam = new Capture(this, cameras[0]);
  cam.start();

  // initialize with the classic face detection cascade file:
  classifier = new CascadeClassifier(dataPath("haarcascade_frontalface_default.xml"));

  faceRects = new ArrayList();
}

void draw() {
  // if there's a new frame in the camera:
  if (cam.available() == true) {
    cam.read();
    // get a PImage from the camera
    pimg = cam;
    // convert to OpenCV
    Mat m = ocv.toCV(pimg);
    // we want a grayscale image
    Mat gray = new Mat(m.rows(), m.cols(), CvType.CV_8U);
    Imgproc.cvtColor(m, gray, Imgproc.COLOR_BGRA2GRAY);

    MatOfRect objects = new MatOfRect();

    Size minSize = new Size(150, 150);
    Size maxSize = new Size(300, 300);

    // do the actual detection, more info here
    classifier.detectMultiScale(gray, objects, 1.1, 3, Objdetect.CASCADE_DO_CANNY_PRUNING | Objdetect.CASCADE_DO_ROUGH_SEARCH, minSize, maxSize);

    // replace last frame's detections with this frame's
    faceRects.clear();
    for (Rect rect : objects.toArray()) {
      faceRects.add(new Rect(rect.x, rect.y, rect.width, rect.height));
    }
  }

  // draw the image
  image(cam, 0, 0);

  // now draw the detected face regions
  for (int i = 0; i < faceRects.size(); i++) {
    rect(faceRects.get(i).x, faceRects.get(i).y, faceRects.get(i).width, faceRects.get(i).height);
  }
}

If you've worked with raw OpenCV in C++ or Python, this should all look pretty familiar to you. If not, it might look very different from what you're used to seeing. Most Processing libraries for computer vision work tend towards a few simple, carefully wrapped methods. I'm trying to keep this library oriented towards the very minimal, so the only methods I'll be creating are simple conversion methods to change from Processing to OpenCV and back. Anything else you do can be done in straight-up OpenCV code, like the call to detectMultiScale() you see in this code. That's normally wrapped in a "detectFaces()" type method, but that's a little deceiving because detectMultiScale() can be used to detect anything: faces, eyes, hands, stop signs, airplanes, the list is almost limitless. Exposing a little more of that means that you can start to experiment and explore, and that's a good thing. This might seem tricky at first, but it means that your applications can more easily be modified and extended to do literally anything that OpenCV can do, and porting them to another platform is almost painless. That's all for the moment, but I'll be building out ocvP5 more over the coming weeks, so keep an eye out if you're interested.

7 Nascent Thoughts on Critical Artwork

Law is often an inadequate discourse through which to publicly and intelligibly express the difficulties of legislating online space, digital privacy, surveillance, and the agency of our creations.

Art is potentially an adequate discourse to do these things.

Critical Design is an adequate discourse to do these things.

We require tactics to query legislation and norms around these politics, in an intelligible, public, and open way. In a way that isn’t couched in rhetoric. Not a strategy, not a critical stance, not in art galleries.

The erosion of public space and the laws and practices curtailing individuals’ ability to document police action are all of vital importance to expression, communication, socialization, and, fundamentally, “human-ness”.

We cannot adequately challenge a law without a legal case. The most expedient path to a challenge of a law is for someone to break it.

This is, of course, a bold, almost crazed thing to ask of an artist. No one wants to go to jail, face increased surveillance, TSA pat-downs, or any of the other potentially life-altering difficulties that a criminal record leads to. Yet I think that art is perhaps the only arena in which a discussion of these issues can take place. It is outside the realm of direct commerce (by happenstance). It requires, and is often granted, the permission to be purely provocative and critical. It is one of the last vestiges of pure exploratory rhetoric. For that reason, it’s worth asking what questions a critical art can ask that no other type of investigation can.


[Design can be …] The stable platform on which to entertain unusual bedfellows. The glue for things that may not be naturally sticky. The lubricant that allows movement between ideas that don’t quite run together. The medium through which we can make otherwise awkward connections and comparisons. The language for tricky conversations and translations.

— Dr Ken Arnold, Head of Public Programs, The Wellcome Trust, London.