Mouse Modding

Last night I made some changes to my mouse; we’ll see if they’re improvements or not. Originally I was dismantling it to clean it. Something had gotten inside the wheel and was causing it to scroll inconsistently. Once I got inside, though, I saw an opportunity to tune it.

First was the clicking on the wheel. This is accomplished with a spring, and is sometimes annoying, especially since I like to give my scroll wheel big long spins to quickly move around on a page. Removing the spring was pretty easy.

Second, I noticed a weight added to the inside. I’m not entirely sure what the weight accomplishes, other than making the mouse that much harder to move, so I took it out, too.

If I need to, I can easily put the parts back in, but at the moment I’m rocking a lighter, smoother mouse.


Laser Pointer Interface

A while ago I built a computer input device using a laser pointer and a regular USB web camera.

It was a pretty simple setup, and I used a lot of existing tools as a jumping-off point. Here’s a writeup of my work, details on how to replicate it, and what I learned.

First, a video overview:

Materials

At a minimum:

  • A web camera
  • A laser pointer

Optionally:

  • A projector

Technically speaking, the projector is completely optional. In fact, during testing I just had a desktop computer with the camera pointed at a sheet of paper taped to a wall, and I drew on that sheet with the laser pointer and used that as an input device. With the projector, the setup becomes a more direct input, as your point translates directly to a point on a screen. But that’s just a bonus. This can all be done without the projector.

Physical Setup

Take the camera, point it at something. Shine the laser inside the area where the camera can see. That’s it in a nutshell. However, there are some additional considerations.

First, the more directly the camera faces the surface, the more accurate the system will be. If the camera is off to the side, it sees a skewed wall, and because one side is closer than the other, it’s impossible to focus perfectly; the near side of the image will be more precise than the far side. Pointing the camera as directly as possible at the surface is the best option.

Second, the distance to the surface matters. A camera that is too far from the surface may not be able to see a really small laser point. A camera that is too close will see a very large laser point and will not have great resolution. It’s a tradeoff, and you just have to experiment to find the best distance.

Third, if using a projector, the camera should be able to see slightly more than the projected area. A border of a few inches up to a foot is preferable, as this space can actually be used for input even if it’s not in the projected space.

Fourth, it’s important to control light levels. If there are light sources in the camera’s view, such as a lamp or a window, the algorithm will very likely see those points as above the threshold and try to treat them as part of the laser pointer (remember, white light is made up of red, green, and blue, so it will still exceed the red threshold). If you’re using a projector, the laser pointer also has to be brighter than the brightest white the projector can produce, with the threshold set between the two. And the ambient light in the room can’t be so bright that the threshold has to be raised to the point where the laser pointer itself is no longer recognized. Again, there are a lot of tradeoffs with the light levels in a room.

Software Packages

I wrote my software in Java. There are two libraries that I depended on heavily:

  • JMF (the Java Media Framework), which grabs the frames from the web camera
  • JAI (Java Advanced Imaging), which provides the coordinate transform

The JAI library is not entirely essential: you could decide not to translate your coordinates, or you could do the transform math yourself and eschew a large library that otherwise goes mostly unused. The neat thing about the transform, though, is that it allows the camera to be anywhere; as long as it can see the desired area, the transform takes care of converting to the correct coordinates. This is very convenient.

The JMF library exists for Windows, Linux, and Mac. I was able to get it working in Windows, but wasn’t able to get it completely working in Linux (Ubuntu Jaunty as of this writing), and I don’t have a Mac to test on.

Basic Theory

The basic theory behind the project is this: a laser pointer shines on a surface. The web camera is looking at that surface. Software running on a computer analyzes each frame of the camera image and looks for the laser pointer. Once it finds the pointer, it converts the camera coordinates of that point into screen coordinates and fires an event to any piece of software that is listening. That listening software can then do something with the event. The simplest example is a mouse emulator, which merely moves the mouse to the correct coordinates based on the location of the laser.

Implementation Theory

To implement this, I have the JMF library looking at each frame. I used the frameaccess.java example code as a starting point. For each frame, I only look at a 320×240 image, and specifically only at the red channel. Each pixel has a value for red, green, and blue, but since I’m looking for a red laser, I don’t really care about anything but red. I traverse the entire frame and build a list of all pixels above a certain threshold value. These are the brightest pixels in the frame and very likely the laser pointer. Then I average the locations of those pixels to come up with a single point. This averaging is very important, and I’ll describe some of its effects later. I take this point and perform the transform to convert it to screen coordinates. Then certain events get fired depending on what changed:

  • Laser On: the laser is now visible but wasn’t before.
  • Laser Off: the laser is no longer visible.
  • Laser Stable: the laser is on but hasn’t moved.
  • Laser Moved: the laser has changed location.
  • Laser Entered Space: the laser has entered the coordinate space (I’ll explain this later).
  • Laser Exited Space: the laser is still visible, but it no longer maps inside the coordinate space.

For most of these events, the raw camera coordinates and the transformed coordinates are passed to the listeners. The listeners then do whatever they want with this information.
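
To give a flavor of what a listener looks like, here’s a minimal sketch of the mouse emulator mentioned above. The LaserListener methods match the events described; the LaserEvent accessors (getPoint() and isInSpace()) are my shorthand here, and the actual names in the code may differ. The pointer movement itself is just java.awt.Robot:

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.geom.Point2D;

// A minimal mouse emulator: it moves the pointer to wherever the laser is.
// The accessor names on LaserEvent (getPoint, isInSpace) are assumptions.
public class MouseEmulator implements LaserListener {
    private final Robot robot;

    public MouseEmulator() throws AWTException {
        robot = new Robot();
    }

    public void laserOn(LaserEvent e)    { move(e); }
    public void laserMoved(LaserEvent e) { move(e); }
    public void laserStable(LaserEvent e)  {}
    public void laserEntered(LaserEvent e) {}
    public void laserExited(LaserEvent e)  {}
    public void laserOff(LaserEvent e)     {}
    public void laserRaw(LaserEvent e)     {}

    private void move(LaserEvent e) {
        Point2D p = e.getPoint(); // transformed screen coordinates (assumed accessor)
        if (e.isInSpace()) {      // only move while the laser maps onto the screen
            robot.mouseMove((int) p.getX(), (int) p.getY());
        }
    }
}

You’d register it with something like addLaserListener(new MouseEmulator()) once the detector is running.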

Calibration

Calibration is really only necessary if you are using the coordinate transforms. Essentially, the calibration process consists of identifying four points and mapping their camera coordinates to known screen coordinates. I wrote a small application that shows a blank screen and prompts the user to put the laser point on each of four prompted circles, giving the system a mapping at known locations. It writes the data to a configuration file, which is used by all the other applications. As long as the camera and projector don’t move, calibration does not need to be done again.
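
To illustrate what the calibration data buys you, here’s a sketch of how the four point pairs get used, mirroring the getQuadToQuad call in the detection code below (mapping holds the camera-space points recorded during calibration, and correct holds the known screen-space locations of the prompted circles):

import java.awt.geom.Point2D;
import javax.media.jai.PerspectiveTransform;

// Map a detected camera point into screen coordinates using the four
// calibration pairs: mapping[i] (camera space) to correct[i] (screen space).
static Point2D cameraToScreen(Point2D[] mapping, Point2D[] correct, Point2D cameraPoint) {
    PerspectiveTransform camToScreen = PerspectiveTransform.getQuadToQuad(
            mapping[0].getX(), mapping[0].getY(), mapping[1].getX(), mapping[1].getY(),
            mapping[2].getX(), mapping[2].getY(), mapping[3].getX(), mapping[3].getY(),
            correct[0].getX(), correct[0].getY(), correct[1].getX(), correct[1].getY(),
            correct[2].getX(), correct[2].getY(), correct[3].getX(), correct[3].getY());
    return camToScreen.transform(cameraPoint, null);
}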

Here is a video of the calibration process.

The Code

Here is the camera.zip (3.7 MB). It includes the JAI library, the base laser application, the calibrator, and an example application that just acts as a mouse emulator.

Below are a couple snippets of the important stuff.

This first part is the code used to parse each frame and find the laser point, then fire the appropriate events.

/**
 * Callback to access individual video frames. This is where almost all of the work is done.
 */
void accessFrame(Buffer frame) {
    /**************************** Begin Laser Detection Code ****************************/
    // Reset all the points to an impossible coordinate
    for (int i = 0; i < points.length; i++) {
        points[i].x = -1;
        points[i].y = -1;
    }
    int inc = 0; // how many bright pixels we have found so far
    byte[] data = (byte[]) frame.getData(); // grab the frame data
    // Walk the whole buffer, jumping by three because we only want the red value out of the RGB
    for (int i = 0; i < data.length; i += 3) {
        // A stricter check would also require green and blue to be below a lower threshold:
        // if (unsignedByteToInt(data[i+2]) > THRESHOLD && unsignedByteToInt(data[i+1]) < LOWERTHRESHOLD
        //         && unsignedByteToInt(data[i+0]) < LOWERTHRESHOLD && inc < points.length) {
        if (unsignedByteToInt(data[i + 2]) > THRESHOLD && inc < points.length) {
            points[inc].x = (i % (3 * CAMERASIZEX)) / 3; // x coordinate within the row
            points[inc].y = i / (3 * CAMERASIZEX);       // y coordinate (which row we are on)
            inc++;
        }
    }
    // Average the locations of the points we found
    ave.x = 0;
    ave.y = 0;
    for (int i = 0; i < inc; i++) {
        if (points[i].x != -1) {
            ave.x += points[i].x;
        }
        if (points[i].y != -1) {
            ave.y += points[i].y;
        }
    }
    if (inc > 3) { // enough points that we probably have a laser pointer on the screen
        ave.x /= inc; // finish calculating the average
        ave.y /= inc;
        PerspectiveTransform mytransform = PerspectiveTransform.getQuadToQuad(
                mapping[0].getX(), mapping[0].getY(), mapping[1].getX(), mapping[1].getY(),
                mapping[2].getX(), mapping[2].getY(), mapping[3].getX(), mapping[3].getY(),
                correct[0].getX(), correct[0].getY(), correct[1].getX(), correct[1].getY(),
                correct[2].getX(), correct[2].getY(), correct[3].getX(), correct[3].getY());
        Point2D result = mytransform.transform(new Point(ave.x, ave.y), null);
        in_space = !(result.getX() < 0 || result.getY() < 0
                || result.getX() > SCREENSIZEX || result.getY() > SCREENSIZEY);
        if (!on) {
            fireLaserOn(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
            on = true;
        }
        if (in_space && !last_in_space) {
            fireLaserEntered(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, true));
        }
        if (result.getX() != last_point.getX() || result.getY() != last_point.getY()) {
            fireLaserMoved(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
        } else {
            fireLaserStable(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
        }
        if (!in_space && last_in_space) {
            fireLaserExited(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, false));
        }
        last_time = 0;
        last_point = new Point2D.Double(result.getX(), result.getY());
    } else if (last_time == 5) { // five frames since we last saw the pointer, so it must have disappeared
        if (in_space) {
            fireLaserExited(new LaserEvent(-1, -1, ave.x, ave.y, (int) last_point.getX(), (int) last_point.getY(),
                    (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
        }
        fireLaserOff(new LaserEvent(-1, -1, ave.x, ave.y, (int) last_point.getX(), (int) last_point.getY(),
                (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
        on = false;
        in_space = false;
    }
    // Parentheses added here: the original relied on && binding tighter than ||,
    // which skipped the bounds check whenever ave.x was positive
    if ((ave.x > 0 || ave.y > 0) && ave.x < CAMERASIZEX && ave.y < CAMERASIZEY) {
        fireLaserRaw(new LaserEvent(-1, -1, ave.x, ave.y, -1, -1,
                (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
    }
    last_time++; // usually gets reset to 0 every frame while the laser is visible
    last_raw_point = new Point(ave.x, ave.y); // remember the last raw point no matter what
    last_in_space = in_space;
    /**************************** End Laser Detection Code ****************************/
}

public int unsignedByteToInt(byte b) {
    return (int) b & 0xFF;
}

This next part is pretty standard code for adding event listeners. You can see which laser events are getting passed. I intentionally made it similar to how mouse listeners are used.

Vector<LaserListener> laserListeners = new Vector<LaserListener>();

public void addLaserListener(LaserListener l) {
    laserListeners.add(l);
}

public void removeLaserListener(LaserListener l) {
    laserListeners.remove(l);
}

private void fireLaserOn(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserOn(e);
    }
}

private void fireLaserOff(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserOff(e);
    }
}

private void fireLaserMoved(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserMoved(e);
    }
}

private void fireLaserStable(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserStable(e);
    }
}

private void fireLaserEntered(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserEntered(e);
    }
}

private void fireLaserExited(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserExited(e);
    }
}

private void fireLaserRaw(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserRaw(e);
    }
}
A few notes, caveats, and lessons learned:

  • This algorithm is extremely basic and not at all robust. By just averaging the points above the threshold, I don’t account for multiple lasers on the screen, I don’t filter out errant pixels that cross the threshold by accident, and I don’t filter out light sources that aren’t moving. A more robust algorithm would do a better job and could potentially identify multiple laser pointers.
  • I’m not the first person who has done this, though from what I can tell this is the first post that goes into this much detail and provides code. I have seen other people do it on other platforms, and I have seen people try to sell this sort of thing. In fact, this post is partly a response to people who think they can get away with charging thousands of dollars for something that amounts to a few lines of code and less than $40 in hardware.
  • Something I’d like to see in the future is a projector with a built-in camera that can do this sort of thing natively, perhaps even using the same lens system so that calibration would be unnecessary.
  • You may have seen references to it in this post already, but one thing worth highlighting is having the camera see outside the projected area and using that space for additional input. Because the laser pointer has no buttons, its input abilities are limited. One way around this is to take advantage of the space outside the projected area. For example, the laser could act as a mouse while inside the projected area, but moving it up and down along the side of the projected area could act as a scroll wheel. In a simple paint application, I had the space above and below the area change the brush color, and the sides change the brush thickness or the input mode. This turns out to be an extremely useful way of adding interactivity without requiring new hardware or covering up the projected area (see the sketch after this list). As far as I can tell, no one else has done this.
  • I have seen laser pointers with buttons that go forward and backward in a slideshow via a dongle that plugs into the computer. They are much more expensive than generic laser pointers, but one could be repurposed to make the laser pointer much more useful.
  • Just like a new mouse, the laser pointer takes a lot of practice, and not just for smooth, accurate movement: turning it on and off where you want takes practice too. Releasing the power button typically makes the point jump slightly, so if you’re trying to release the laser over a small button, you have to account for that jump so the laser goes off while it’s over the right spot.
  • This was a cool project. It took some time to get everything working, and I’m throwing it out there. Please don’t use this as a school project without adding to it. I’ve had people ask me for my source code so they could hand it in without doing anything on their own. That’s weak, you won’t learn anything, and professors are just as good at searching the web as you are.
  • If you do use this, please send me a note at laser@bobbaddeley.com. It’s always neat to see other people using my stuff.
  • If you plan to sell this, naturally I’d like a cut. Karma behooves you to contact me to work something out, but one of the reasons I put this out there is that I don’t think it has a lot of commercial promise. There are already people trying to sell it, and there are also people like me putting out open source versions.
  • If you try to patent any of the concepts described in this post, I will put up a fight. I have YouTube videos, code, and witnesses dating back a few years, and there is plenty of prior art from other people as well.
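
To make that border-input idea concrete, here’s a minimal sketch of how a listener might classify transformed coordinates into input zones. The specific zones and actions are illustrative, loosely following the paint-application example above:

// Classify a transformed point into an input zone. Inside the projected
// area the laser acts as a mouse; the border strips become extra controls.
// The specific zone assignments here are illustrative only.
String classifyZone(double x, double y, int screenW, int screenH) {
    boolean insideX = x >= 0 && x <= screenW;
    boolean insideY = y >= 0 && y <= screenH;
    if (insideX && insideY) {
        return "mouse";       // normal pointer input
    } else if (insideY) {
        return "scroll";      // left/right borders act like a scroll wheel
    } else if (insideX) {
        return "brush-color"; // above/below the area changes the brush color
    } else {
        return "ignore";      // corners get no action
    }
}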

TV Remote Alarm

We had an interesting problem at work. There’s a display in the main lobby of my building that shows the calendar of all the conference rooms and a map showing where they are in the building. It’s pretty handy for visitors and looks really slick. The problem, though, is night. There’s no point in having the display running 24/7. But the TV has a flaw where it won’t go into sleep mode when the HDMI cable is plugged in, even if the computer itself is asleep and there isn’t a signal.

The solution so far has been for a select few to turn it on in the morning when they arrive and off when they leave. Naturally, this isn’t a sustainable or reliable solution, as it doesn’t take a lot for the system to break down.

So Ian brought me in on the problem to see what I could do with it. I thought about existing options. An outlet timer would work for turning the TV off in the evening, but not for turning it on in the morning (it would give the TV power, but not turn it on). I even found an alarm clock that could be programmed to turn a TV on and off, which was really close to what we wanted, but it was discontinued, and reading the manual it looked like it wasn’t going to work anyway.

I realized I would have to build something. I started off thinking of building on the Arduino microcontroller board, which I’ve used for other projects and really enjoy. I spent a day hooking up an infrared LED and trying to get it to output a standard on/off signal the TV would recognize. I also tried to hook up an LCD screen and buttons for configuring the timer, but I quickly got frustrated: each part took way longer than I wanted, and I wasn’t getting anywhere.

It made a lot more sense to work with existing electronics and cobble something together. It turned out I already had an alarm clock that I had stopped using in favor of my cell phone, and it had two configurable alarms. Ian had also bought me a cheap universal remote. So I just had to get the alarm clock to trigger the remote control.

This was easier said than done. First I took apart the remote control. I followed the traces back from the on/off button and soldered a couple wires to them, then fed them out the back of the remote through a hole where the battery cover was. Next, I opened the alarm clock and went about trying to identify triggers I could use to determine the alarm state. I was hoping for something simple, like one node being +5V when the radio alarm was on and a different node being +5V when the buzzer alarm was on. Sadly, there was no such luck.

I’ll spare most of the details, but I never found a clean signal I could use. I ended up taking the radio alarm, cutting out the speaker, turning the volume all the way up, and using the speaker wire to drive two relays: one tripped the remote, and the other fed the alarm reset button. That way the radio would turn on, the signal would trip the remote, and the alarm would reset itself. That one worked pretty slick.

The buzzer alarm was even harder. Not only could I not find a signal, it didn’t go to the speaker either; it went to a separate piezoelectric speaker, and the voltage to it wasn’t enough to trip the relays. So I had to build an amplifier circuit to bump the signal up to something that would. But then there was another problem: it was tripping the alarm reset relay faster than the remote relay, so the alarm would reset before the remote control had a chance, and the TV never got switched. I fixed this by putting an RC delay circuit on the alarm reset relay.
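
(For reference, the delay of an RC circuit is governed by its time constant τ = R × C. The values here are illustrative rather than the exact parts I used: a 10 kΩ resistor charging a 100 µF capacitor gives τ = 10,000 Ω × 0.0001 F = 1 second, plenty of time for the remote relay to fire before the reset relay closes.)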

I put it all back together and tested it out. It’s in my apartment, so I had to try it out on the VCR (which I had to take out of its box), but it worked: the alarm clock dutifully turned the VCR off and on at the right times.

I’m bringing it in to work tomorrow to see if it’ll work on the intended television. It’ll probably sit on a counter across the lobby and point at the TV, and definitely have a sign that says what it is so people don’t get suspicious.

Here’s a picture of the completed project. I won’t show the insides because I’m a little embarrassed by the circuit. I could have done a much cleaner and more correct design, but it works, so I’m happy. I hope people at work appreciate it, too.

Hard Drive Surgery

A friend of mine recently had a minor emergency when a portable hard drive was knocked off a table and stopped working. I was called in to help. Indeed, it did not work: when it was plugged in, the computer couldn’t recognize the device at all (and I tried multiple computers and operating systems).

Since there was nothing I could do externally, I opened up the case, taking care that anything I did could be undone. The case wasn’t even screwed together; it was two pieces of plastic that snapped together. After unsnapping it all the way around, the hard drive was exposed. Again, no screws: it was held fast with rubber strips on the corners. A piece of aluminum foil covered the electronics, so I carefully peeled that back. Glancing at the board, I didn’t see anything obviously wrong. The board was attached to the hard drive and was easy to pull off. It turned out the drive used a standard SATA connection, so I turned off my computer, plugged it in, and turned the computer back on. It had no problem recognizing and mounting the drive. I created a folder on my computer and immediately copied all the files over without any problems, then compared the file sizes to make sure I had gotten everything and it added up to the right size. After that, I turned off the computer and removed the drive.

Looking again at the board, I noticed a small part near the USB connection that was askew. On closer inspection, it had broken off the board and was hanging by only one of its four solder points. The board was small, though, and the connections tiny. I heated up the soldering iron and tried to get in there, but there was no way I’d be able to resolder it. Just too small. I told my friend the data was fine, the board was not, and that if she got another portable hard drive I could copy the files over to it.

She brought me a new portable hard drive, so I plugged it in, copied the files, checked the sizes to make sure everything had copied, and unplugged it. Then I brought her the new hard drive and the old one and showed her the broken part and what had happened. Since the bare drive was still good (it’s a 120GB laptop hard drive), it didn’t make sense to discard it. She’s going to confirm that everything is there, and then I’ll delete the copy of the data on my hard drive.

The whole operation was surprisingly easy, and it certainly helped that the portable hard drive was so simply designed and used standard connections. I’m glad we were able to recover everything, though a little disappointed I couldn’t resolder the part back on.

VAST Contest 08

One of the things that sucked up a week of my evenings this summer was the VAST contest. VAST is a visualization conference, and I had decided to enter a contest it was hosting. The premise of the contest is to use or develop tools to analyze a dataset and discover the threat. They provided sample datasets, and our job was to look at them and find out what was going on, who the suspects were, and how the social network was organized.

It happens that two of the major pieces of software I’ve been developing at the lab do exactly that, and I figured the contest would be a great opportunity to show off the tools and see how well they work. So I downloaded the dataset and promptly forgot about it until one week before entries were due. Then I worked like mad and submitted at the last possible minute. I was at the lab until 1 or 2am every day that week, working on the datasets and my software, tweaking, exploring, writing up my results, and putting together video explanations.

The contest was divided into four completely separate challenges. The first had to do with edits to a wiki page. We were given a fake wiki page along with every edit made to it, and were told to determine who was on what team and whether any of the teams had malicious intent. I first used one of my programs to filter out the junk edits, grammar fixes, and spam, then filtered by number of contributions to find out who the key players were. Then I read through the conversations and split the teams up by who was arguing with whom, eventually coming up with a pair of teams. It was a lot more complex than that, but that was the gist.

The second challenge was migrant boats. We were given an XML file of fake Coast Guard interdictions, in which boats bound for the Florida coast were stopped at sea. There was a lot of metadata associated with the interdictions. For this one, I used a custom Google Map to plot the interdictions, with a slider bar that showed where they were taking place over time. I also used color-coded markers to show the kinds of boats used, the number of deaths, where they landed, and other interesting statistics.

The third challenge was cell phone calls. We were given a list of cell phone records that included from, to, tower, date, and duration, and had to figure out who was who from the calls they made, reconstructing the whole network and who was doing what from that data alone. I came up with some interesting results using color-coded tables and my network graphing tool. I was also able to plot the calls on a timeline, which showed that some people appeared to be on conference calls because their calls overlapped so much.

The final challenge was my favorite, and the one on which I spent the most time. I had to write a lot more software for this one, too. We were given a fake building and fake locations of the occupants of the building over time. We had to look at the data to determine what happened when, who was a suspect, who was a witness, who was a casualty, and anything else interesting. I wrote software that let the user choose which people to watch and over what time period, so you could scroll around and see interesting things. Here’s a picture of it:

If you want to see my whole entry for the contest, you can go here: http://www.bobbaddeley.com/vast08/. Each section has my evaluation as well as a video of me describing how I approached the problem. In the end I didn’t win any awards, but I was the only entrant from PNNL, and I think I was the only one-person team. I’ll be a lot more prepared next year, and I fully intend to win some awards.

An invention of mine

Here is a video of something I put together not too long ago. It’s me using a laser pointer to control a mouse on my projector. I have a projector hooked up to my computer. I also hooked up a webcam and wrote some software that analyzes the camera image to pick out the bright red dot, and then I use that as an input device. It’s pretty simple and works very well. I wrote a little paint program, a dart game, and the game Missile Command, which makes a lot of sense in this kind of environment. Eventually I’ll put a few more games together and try for multiple-point recognition so I can do group activities and games with it.

Surgery on my baby

Yesterday my projector was acting up in a very unfriendly way. There was something wrong with the lighting: it was uneven, with a couple of lines through it. Worse, when I moved the projector, the lines moved, and I could hear a part rattling around inside. Eventually, percussive maintenance no longer effected any change in the image. My only real option was to take it apart. Projectors aren’t cheap, and they’re built of complicated and sensitive parts, so I was reluctant to open it up. My first thought was a flaw in the bulb, so I took the bulb out, examined it, cleaned out some accumulated dust, and tried again. No luck. Then I removed the case to expose the inner workings. I couldn’t see any problems immediately, but I wasn’t exactly sure what I was looking for. Since it was approaching 2am, I decided to call it a night and worry about it the next day. Just in case, I ordered a new bulb, which will be delivered early next week. It turned out that the bulb was not the problem, but it’s too late to cancel the order, and it will be nice to have a backup anyway.

Today I decided to give it another try. I dismantled it again, and this time turned it on while dismantled. It is incredible how much light comes through a tiny aperture. I could not look at it without some kind of protection. By following the light path, I was able to discover the problem.

The light passes through a very small square aperture, perhaps half a centimeter wide, into a light tube composed of four small rectangular mirrors. The tube is about 3 centimeters long, and the mirrors are attached along their long edges with some sort of glue. What had happened was that the heat from the lamp had weakened the glue, and one of the four sides had come loose. That explained the dimming, since the mirror was no longer directing the light, and the lines, since the loose mirror was sitting partially in the light path.

I was able to extract the tube and, using scotch tape (bless the stuff), reconstruct it with the fourth side back in place. I’m concerned the tape will melt or otherwise fail, but if that happens I’ll look for a more formidable adhesive. I put everything back together, and my projector worked as well as it ever had.

It was a little intimidating working with an expensive piece of equipment whose mechanisms were mostly unfamiliar to me, but I’m glad that I was able to figure it out and fix it without damage.

Typical me

A few days ago at work it was snowing. I don’t have a window in my office, so I was getting regular updates on the rate of snowfall from more privileged coworkers. The terms they used were wildly inconsistent, though, and I figured there had to be a way to measure the actual rate of snowfall that wasn’t “kinda coming down softer now.” So I ran a little experiment. I taped a black piece of paper outside in the courtyard. A coworker with a camera and a view of the courtyard then aimed the camera across the courtyard at the paper. I wrote a little software to take each image, analyze the part with the black paper, and count how many pixels were above a given threshold. The theory was that falling snow would show up as white spots in front of the black paper, so counting the white spots at any given moment gives a number representing the current rate of snowfall.
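
The analysis itself was only a few lines of code. Here’s a minimal sketch of the idea in Java; the region coordinates and threshold are stand-ins, and my actual version read frames from the camera rather than from a file:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Count the pixels brighter than a threshold inside the region of the
// frame that contains the black paper. Each bright pixel is presumably
// a snowflake passing in front of the paper.
static int countSnowPixels(File frameFile, int rx, int ry, int rw, int rh, int threshold) throws Exception {
    BufferedImage img = ImageIO.read(frameFile);
    int count = 0;
    for (int y = ry; y < ry + rh; y++) {
        for (int x = rx; x < rx + rw; x++) {
            int rgb = img.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            if ((r + g + b) / 3 > threshold) {
                count++; // a white spot against the black paper
            }
        }
    }
    return count;
}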

Sadly, the weather made a mockery of my experiment: by the time I had everything set up and was ready to test, it had stopped snowing. Still, it was fun to try, and I think it was working correctly and just needed some tweaking, though the importance of such a task in the grand scheme of things is right up there with grooming shag carpet or arranging my spice cabinet by region of origin.

Computer Lights Show

Back in college I occasionally did DJ gigs. It was a lot of fun, and I did some pretty neat things to make it easier. I had a remote control for my WinTV card, and I remapped its buttons to control WinAmp so I could manage the music while I was out on the dance floor. Another thing I did was build some lights into my computer case.

When I originally built it, the lights were controlled by the internal serial port, and I wrote some software to advance them. I was even able to integrate the sound volume and had some rudimentary beat detection going so the lights would change on the beat. Unfortunately, the system slowly degraded over time. First, the beat detection stopped working when I upgraded from Windows ME to Windows XP. Then the external power supply died. Finally, I switched to Linux, so the software I had written to control it wouldn’t run at all.

In January 2007 I cleaned things up quite a bit. First, I connected the power to the computer’s power supply, thus removing the dependence on an external plug. Next, I set it up with a 555 timer chip and inserted a potentiometer to vary the speed of the flashing. I had to replace a light bulb, but the refurbishing took only a few hours. Now it seems to be working fairly well. See the pics and the video. There are 5 lights throughout the case, and they flash in order.

The circuitry is very simple. A decade counter increments every time it gets a pulse, and the 555 timer provides those pulses. Each time the counter increments, it turns on a different transistor, which powers a different light. Pin 6 is wired to the reset pin so that the decade counter only counts 1-5 over and over again. If you want more details about the circuitry, contact me.
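
(For the curious: in the standard 555 astable configuration, the output frequency is roughly f = 1.44 / ((R1 + 2·R2) × C), so a potentiometer in the R2 position varies the flash rate directly. With illustrative values, not necessarily the ones in my box, of R1 = 10 kΩ, R2 = 100 kΩ, and C = 10 µF, that works out to about 0.7 Hz, or a light change every second and a half or so.)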

Video of the computer lights show in action (Windows Media Video (WMV) format, no audio)


Composite to S-Video

With the projector in my apartment, I have a VGA cable and an S-Video cable running to it as inputs. However, my Playstation2 cable only has composite video output. It turns out that composite can be hacked into s-video fairly easily: run the composite video signal to both the brightness (luma) and color (chroma) pins of the s-video connector and you get a usable video feed. It’s not great quality, but it’s better than nothing. I eventually managed to find a Playstation2 cable with s-video out, so it’s no longer an issue, but for a while I needed a solution.
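
(If you want to try the hack: on the standard 4-pin mini-DIN s-video connector, pin 3 is luminance (Y), pin 4 is chrominance (C), and pins 1 and 2 are their grounds. The crude version just ties the composite signal to both pins 3 and 4. A common refinement puts a small capacitor, on the order of a few hundred picofarads, in series with the chroma pin to block the low-frequency luma signal. These pin details are from memory, so double-check a pinout diagram before soldering.)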