This weekend I was busy. Saturday morning we went to the raceway in West Richland to watch some friends autocross. They weren’t scheduled to run until later, though, so we stayed for about an hour, then headed to another engagement: the Tri-City Wing Wars.

The Wing Wars was a competition among local restaurants to see which one had the best wing. For $5, anyone could enter and have access to unlimited wings from any of the stations, all of which hid their logos and their brand name. People could vote for their favorites, which would be revealed at the end.

As it turned out, more people showed up than they expected, and they ran out of wings after about an hour. Sadly, I was one of the reasons they ran out. I managed to consume two dozen chicken appendages in that hour, though to be fair I didn’t target my consumption at any particular vendor. The Parmesan wings were my favorite. I saw a few friends there, but after the wings ran out, so did we, heading back to the track to see if we could see the friends race.

We arrived in time, and Ricky offered to let me ride with him. He has a nice convertible, so I was happy to accept. The helmet, though tight, wasn’t the claustrophobic experience I expected, and having the top down held the heat and gas smell at bay, so I was quite comfortable. The racing itself was a lot of fun, and the entire time I felt like we were pushing the car to the very edge of its capabilities, and that if Ricky pushed it any harder, or handled it any less expertly, we would certainly spin out. It felt like we hit every cone, but in fact he had a great race and only hit two.

Me and Ricky about to autocross

Later, Naomi raced with Ricky, which was fun to watch, and we could all hear her screaming (we assume with joy) occasionally.

Naomi and Ricky autocrossing

Finally, Dan was up, and he let Erin ride as his passenger. She put on her helmet and joined him in the car, a very sporty little thing into which Dan had obviously put a lot of care. I climbed the lookout tower and caught the ride on video; it turned out to be a great run, netting him first in his class for the day.

Dan

Dan and Erin are ready

After the autocrossing, we went to our friend Dimple’s house for pizza and a movie, then went back home.

Sunday morning, I was up early. I had planned to go hiking with Jim and Erin that day, but Erin bailed at the last minute, citing work. Jim and I still went, and it took about an hour to get to our destination: the white bluffs of the Wahluke wildlife reserve. We chose a path that wasn’t the famous white bluffs trail itself, but a less traveled one. Half of our walk was on an old blocked-off road, and the other half was along coyote trails that were barely visible. Along the way we could see the whole Hanford area.

We also came across an old earth mover. The handle wasn’t attached to the door, but it was easy to insert and open. The key was lying on the cab floor. The battery, however, was dead. Still, it was good for a photo.

I took a few panos on the hike, too.

My biggest lesson from U.S. history class in high school didn’t have anything to do with history. It’s a lesson that’s stuck with me ever since, and it has affected many aspects of my life: getting things done is really intimidating, until you start.

Within the first month we had our initial assignment, and this was an AP class, not a regular history class: our task was to write three essays, and we had a week to do it. I went through so many emotions: anger, outrage, frustration, hopelessness. I started to do the work, but it was impossibly daunting. There was so much that I didn’t know, so much research to be done. Each question could have been a master’s thesis and seemed to require citing dozens of sources. I remember crying to my dad. He couldn’t do anything for me, though. Ultimately, I got it done, and on time, by just getting started.

My problem, and my major block, was knowing that I didn’t know enough. There was no way I could read everything I needed to make a cohesive and complete argument. I had to give that up. The trick was to start with a decision already made, start writing, then look for supporting facts in my research materials. I didn’t need to find the right answer; I needed to be able to defend my answer. This worked most of the time, because our lectures usually covered the essay topics so we had a good idea going in, but it didn’t always happen that way. Sometimes I would start with a thesis, then in the process discover that it was completely wrong. Since I had already started, and was already in the process, it was easier to go back and modify than it would have been to start over. Eventually I would get to the right answer anyway.

At the beginning of the school year, it took me the whole week to write the three essays, which were assigned every week, but almost all of that time was spent just trying to get started. By the end of the year, and for the AP test, I was producing all three essays in under an hour, at roughly 2-3 pages each. Sure, I had gotten better at researching, but I’d also become less intimidated by the assignments.

That’s been a lesson for many things. With skydiving, so much worry and preparation goes into that first jump, but after getting used to going through the motions, it becomes routine, and what seems like an impossible feat to those who haven’t done it yet is just another day for those who have.

The truth is everything is intimidating until you start doing it. Then you learn a lot really fast. Then you become good at it. If we all just accepted that once we started doing something it would be fine, and skipped the intimidation step, life would be a lot easier for everyone.

Jump 29

October 10, 2010

Saturday morning I looked at the weather and it was not good. There was a good chance of rain, it was overcast, and the ceiling in Ritzville was 7500-9000 feet. My friend and I were supposed to go skydiving, but the weather made it seem like it might not happen. She called and they said it was still on, so we got in the car and drove up there. Once we arrived, I suited up and barely made the next load, and I’m glad I did. We went through some clouds on the ride up, but directly over the drop zone there wasn’t anything, so that was good. It was getting a little cold, but I was also at the door the whole ride. I was the first one out, and it was a great jump. This time I tried a few things: I tried going feet first, but immediately flipped and was on my head, starting to spin. Then I tried doing a couple cannonballs, which is fun because you go faster and you have no control over how you spin. I also played with some flips and turns, generally doing acrobatics and getting back under control, as well as looking around for other divers and building awareness. My chute opened fine and I turned to watch the others, some of whom were still falling for quite a ways, which was cool to watch.

I practiced some riser turns before I unstowed the brakes and made a regular landing, this time standing up and on the grass. I was very happy.

I had some help packing my chute, but I wasn’t fast enough to get on the next load, which was unfortunate because that’s the one my friend was on. I watched her get on the plane and take off. Unfortunately for her, the clouds had rolled in overhead and they could only make it up to 7500 feet before they had to get out.

That was the end of the day, and they rolled the plane into the hangar. I asked the instructor what he wanted me to do before he’d feel comfortable stamping my license card, and he said he already was, so now I officially have an A license for skydiving. Woo!

It’s been three years since I’ve been skydiving. Once Richland Skysports closed, I no longer had a local place to dive. It took the desire of a coworker to jump to motivate me to get back into it. I went the weekend before she was supposed to go so that I could take care of everything I needed to be able to jump with her the next weekend. After talking to the instructors and taking a refresher quiz, they let me do a hop and pop, jumping at 7000 feet and freefalling for ten seconds before pulling. It wasn’t the cleanest exit, but I didn’t hesitate to do it. I had no problem jumping out, but I didn’t have the best posture and struggled for a few seconds to get right. It happened quickly enough, though. I had no problem pulling my chute, and was happy playing around under canopy. I was a little high on the approach but still landed on the grass a very happy guy. They showed me a photo of me exiting the plane, and it was embarrassing how bad my body position was. No wonder I struggled on that first jump.

That was enough to satisfy the instructors, though, and they let me jump at 12500 for my next jump. I was excited to practice freefalling, and took the opportunity to do flips forward and backward, a barrel roll, turning, and some tracking. I had good altitude awareness and didn’t have any problem pulling at the right altitude. I landed on the grass again.

I had some help packing my own chute this time, and got back on the plane for the next ride. This time I played around some more with flips, adjusting my fall rate, tracking, and heading control. I had no problem with my opening. The wind had changed and the landing pattern was different; we were making left-hand turns, which brought us over the top of the building. I was a little nervous about that and ended up going over the building a little high, overshooting the grass by a few feet. I landed on the dirt and slid to a stop.

There weren’t going to be any more loads, so I paid and came home, ultimately very satisfied with my day.

It’s been over a year since I’ve posted, which is not cool. I’d like to say it was the fault of the technology I was using, and that it was a hurdle, but that’s just making excuses. A lot has happened in the last year. The girlfriend and I have traveled to Vietnam, Thailand, Vancouver, Belize, Guatemala, Seattle, Montana, and a bunch of other places. Work hasn’t been much different, but it’s always interesting. In fact, a lot has happened, but over the course of a year, it condenses quickly into only a couple sentences. This is no longer acceptable, so I’ve decided to change technologies, and devote more time to posting about the things that I do. We’ll see if it works out well.

What I Read

May 21, 2009

I’ve been doing a lot of interviews at work lately, and one of the questions I like to ask is what the candidate reads to keep current. In my job I am constantly evaluating new technologies and incorporating new things into our work, and it’s essential that I stay up to date with the latest news, software development practices, gadgets, and the field in general. I can’t count how many times I’ve seen something in my daily reading and used it in my work or at home. I rarely comment or contribute to the sites; I prefer to watch and not participate in what’s usually a flame war among people with questionable qualifications. I read some of the sites at work, but most at home after work.

So here is my list of things I read daily in the industry:

  • http://slashdot.org – I’ve been reading this for 10 years and have only commented a few times, but I check this many times a day and have used information I’ve found on this site for all kinds of things.
  • http://news.yahoo.com – I read this a few times a day to keep up with the news in general. I’ve found this site is the best news aggregation site of the ones I’ve tried.
  • http://finance.google.com – I use this to track a few stocks and look at relevant business news.
  • http://digg.com – I usually do this from home as it has interesting stuff in all kinds of categories.
  • http://gizmodo.com – This site is useful for the latest in gadget and technology news.
  • http://engadget.com – Almost identical to Gizmodo, and they often report on the same things, but sometimes they have a different interpretation.
  • http://joelonsoftware.com – Joel Spolsky’s blog. Somewhat diluted with his own advertising for his talks and conferences, but often has good articles on managing a tech company.
  • http://www.reddit.com/r/joel/ – This channel of reddit is for articles similar to the ones Joel writes.
  • http://thedailywtf.com/ – Every day an article or two about some curious piece of code or business practice.
  • http://www.lifehacker.com – Nifty tools and tricks for technology and geek life.
  • http://fark.com – Mostly curious or silly news, this is great for keeping up with the strange stories that are likely to come up in conversations.

That’s every day, sometimes a few times during the day. I’d say I spend about two hours reading each day, though only about half an hour to an hour of that is at work, usually in a few spare moments between meetings or tasks.

This list does a pretty good job of keeping me up to date in the world.

A while ago I built a computer input device using a laser pointer and regular usb web camera.

It was a pretty simple setup, and I used a lot of existing tools as a jumping off point. Here’s a writeup of my work and details for how to replicate it and what I learned.

First, a video overview:

Materials

At a minimum:

  • A web camera
  • A laser pointer

Optionally:

  • A projector

Technically speaking, the laser is completely optional. In fact, during testing I just had a desktop computer with the camera pointed at a sheet of paper taped to a wall, and I drew with the laser pointer on that sheet of paper and used that as an input device. With the projector, you can turn the setup into a more direct input, as your point translates directly to a point on a screen. But that’s just a bonus. This can be done without the projector.

Physical Setup

Take the camera, point it at something. Shine the laser inside the area where the camera can see. That’s it in a nutshell. However, there are some additional considerations.

First, the more directly the camera faces the surface, the more accurate the system will be. If the camera is off to the side, it sees a skewed wall, and because one side is closer than the other, it’s impossible to focus perfectly; the near side of the wall will be sharper than the far side. Having the camera point as directly as possible at the surface is the best option.

Second, the distance to the surface matters. A camera that is too far from the surface may not be able to see a really small laser point. A camera that is too close will see a very large laser point and will not have great resolution. It’s a tradeoff, and you just have to experiment to find the best distance.
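To put rough numbers on that tradeoff, here is a back-of-the-envelope calculation of my own (the field of view and dot size are assumed values, not measurements from this setup): the camera images a strip of wall about 2·d·tan(fov/2) wide, and the dot’s share of that strip determines how many of the 320 horizontal pixels it covers.

```java
// Back-of-the-envelope estimate of how many pixels a laser dot covers.
// Assumed numbers (not from the original setup): 60-degree horizontal FOV,
// 320-pixel-wide capture, 5 mm laser dot.
public class DotSize {
    static double dotPixels(double distanceM, double fovDeg, int frameWidthPx, double dotDiameterM) {
        // Width of the wall strip the camera sees at this distance
        double stripWidthM = 2.0 * distanceM * Math.tan(Math.toRadians(fovDeg / 2.0));
        // The dot's fraction of that strip, scaled to the frame width
        return dotDiameterM / stripWidthM * frameWidthPx;
    }

    public static void main(String[] args) {
        // At 2 m the dot spans under a pixel; at 0.5 m, nearly 3 pixels.
        System.out.printf("2.0 m: %.2f px%n", dotPixels(2.0, 60, 320, 0.005));
        System.out.printf("0.5 m: %.2f px%n", dotPixels(0.5, 60, 320, 0.005));
    }
}
```

With those assumptions, a dot two meters away covers less than one pixel, which is exactly the “too far to see” failure mode described above.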

Third, if using a projector, the camera should be able to see slightly more than the projected area. A border of a few inches up to a foot is preferable, as this space can actually be used for input even if it’s not in the projected space.

Fourth, it’s important to control light levels. If there are sources of light in the view of the camera, such as a lamp or window, then it is very likely the algorithm will see those points as above the threshold, and will try to consider them part of the laser pointer (remember white light is made up of red, green, and blue, so it will still be above the red threshold). Also, if using a projector, the laser pointer has to be brighter than the brightest the projector can get, and the threshold has to be set so that the projector itself isn’t bright enough to go over the threshold. And the ambient light in the room can’t be so bright that the threshold has to be really high and thus the laser pointer isn’t recognized. Again, there are a lot of tradeoffs with the light levels in a room.

Software Packages

I wrote my software in Java. There are two libraries that I depended on heavily:

  • JMF (Java Media Framework) – for capturing and processing frames from the web camera.
  • JAI (Java Advanced Imaging) – for the perspective transform from camera coordinates to screen coordinates.

The JAI library is not entirely essential, as you could decide not to translate your coordinates, or you could perform your affine transform math to do it and eschew the large library that will go mostly unused. The neat thing about this transform, though, is that it allows for the camera to be anywhere, and as long as it can see the desired area, it will take care of transforming to the correct coordinates. This is very convenient.
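For reference, here is what the do-it-yourself route might look like: fitting an affine map to three calibration point pairs with plain Java math, no JAI. This is my own sketch, not the project’s code, and an affine map can’t capture the true perspective skew that JAI’s quad-to-quad transform handles; it’s only a reasonable approximation when the camera is nearly head-on.

```java
// Fit an affine map (screenX = a*x + b*y + c, screenY = d*x + e*y + f) to three
// calibration point pairs using Cramer's rule. A sketch only: unlike JAI's
// PerspectiveTransform, this cannot model keystone distortion from an off-axis camera.
public class AffineFit {
    // cam and scr each hold three {x, y} pairs; returns {a, b, c, d, e, f}.
    static double[] fit(double[][] cam, double[][] scr) {
        // Determinant of the 3x3 system [x y 1]; it is shared by both output rows.
        double det = cam[0][0] * (cam[1][1] - cam[2][1])
                   - cam[0][1] * (cam[1][0] - cam[2][0])
                   + (cam[1][0] * cam[2][1] - cam[2][0] * cam[1][1]);
        double[] out = new double[6];
        for (int row = 0; row < 2; row++) { // row 0 solves for screen x, row 1 for screen y
            double[] t = {scr[0][row], scr[1][row], scr[2][row]};
            out[row * 3]     = (t[0] * (cam[1][1] - cam[2][1]) - cam[0][1] * (t[1] - t[2])
                               + (t[1] * cam[2][1] - t[2] * cam[1][1])) / det;
            out[row * 3 + 1] = (cam[0][0] * (t[1] - t[2]) - t[0] * (cam[1][0] - cam[2][0])
                               + (cam[1][0] * t[2] - cam[2][0] * t[1])) / det;
            out[row * 3 + 2] = (cam[0][0] * (cam[1][1] * t[2] - cam[2][1] * t[1])
                               - cam[0][1] * (cam[1][0] * t[2] - cam[2][0] * t[1])
                               + t[0] * (cam[1][0] * cam[2][1] - cam[2][0] * cam[1][1])) / det;
        }
        return out;
    }

    static double[] apply(double[] m, double x, double y) {
        return new double[]{m[0] * x + m[1] * y + m[2], m[3] * x + m[4] * y + m[5]};
    }
}
```

Fitting camera corners (0,0), (320,0), (0,240) to screen corners (0,0), (1024,0), (0,768) yields a plain 3.2× scale, and any camera point then maps straight into screen space with two multiply-adds per axis.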

The JMF library exists for Windows, Linux, and Mac. I was able to get it working in Windows, but wasn’t able to get it completely working in Linux (Ubuntu Jaunty as of this writing), and I don’t have a Mac to test on.

Basic Theory

The basic theory behind the project is the following: a laser pointer shines on a surface. The web camera is looking at that surface. Software running on a computer analyzes each frame of that camera image, looking for the laser pointer. Once it finds the pointer, it converts the camera coordinates of that point into screen coordinates and fires an event to any piece of software that is listening. That listening software can then do something with the event. The simplest example is a mouse emulator, which merely moves the mouse to the correct coordinates based on the location of the laser.

Implementation Theory

To implement this, I have the JMF library looking at each frame. I used this frameaccess.java example code as a starting point. For each frame, I only look at the 320×240 image, and specifically only at the red value. Each pixel has a value for red, green, and blue, but since this is a red laser, I don’t really care about anything but red. I traverse the entire frame and build a list of any pixels above a certain threshold value. These are the brightest pixels in the frame and very likely a laser pointer. Then I average the locations of these points to come up with a single point. This is very important, and I’ll describe some of its effects later. I take this point and perform the affine transform to convert it to screen coordinates. Then I fire certain events depending on some specific conditions:

  • Laser On: the laser is now visible but wasn’t before.
  • Laser Off: the laser is no longer visible.
  • Laser Stable: the laser is on but hasn’t moved.
  • Laser Moved: the laser has changed location.
  • Laser Entered Space: the laser has entered the coordinate space (I’ll explain this later).
  • Laser Exited Space: the laser is still visible, but it no longer maps inside the coordinate space.

For most of these events, the raw camera coordinates and the transformed coordinates are passed to the listeners. The listeners then do whatever they want with this information.

Calibration

Calibration is really only necessary if you are using the coordinate transforms. Essentially, the calibration process consists of identifying four points and mapping camera coordinates to the other coordinates. I wrote a small application that shows a blank screen and prompts the user to put the laser point at each of the prompted circles, giving the system a mapping at known locations. This writes the data to a configuration file which is used by all other applications. As long as the camera and projector don’t move, calibration does not need to be done again.
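The original post doesn’t show its configuration file format, so the sketch below is one minimal way to persist the four point pairs using a java.util.Properties file; the file name and key names are my invention.

```java
import java.io.*;
import java.util.Properties;

// Hypothetical persistence for the four calibration point pairs. The original
// config format isn't shown, so the keys here ("point0.x" etc.) are invented.
public class CalibrationStore {
    static void save(File f, int[][] cameraQuad) {
        Properties p = new Properties();
        for (int i = 0; i < 4; i++) {
            p.setProperty("point" + i + ".x", Integer.toString(cameraQuad[i][0]));
            p.setProperty("point" + i + ".y", Integer.toString(cameraQuad[i][1]));
        }
        try (OutputStream out = new FileOutputStream(f)) {
            p.store(out, "laser calibration: camera coords of the four prompted circles");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static int[][] load(File f) {
        Properties p = new Properties();
        try (InputStream in = new FileInputStream(f)) {
            p.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        int[][] quad = new int[4][2];
        for (int i = 0; i < 4; i++) {
            quad[i][0] = Integer.parseInt(p.getProperty("point" + i + ".x"));
            quad[i][1] = Integer.parseInt(p.getProperty("point" + i + ".y"));
        }
        return quad;
    }
}
```

Since every application reads the same file, recalibrating is just a matter of rerunning the calibrator and overwriting it, which matches the “calibrate once, reuse everywhere” behavior described above.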

Here is a video of the calibration process.

The Code

Here is the code: camera.zip (3.7 MB). It includes the JAI library, the base laser application, the calibrator, and an example application that just acts as a mouse emulator.

Below are a couple snippets of the important stuff.

This first part is the code used to parse each frame and find the laser point, then fire the appropriate events.

/**
 * Callback to access individual video frames. This is where almost all of the work is done.
 */
void accessFrame(Buffer frame) {
    /******************************** Begin Laser Detection Code ********************************/
    // Reset every candidate point to an impossible coordinate
    for (int i = 0; i < points.length; i++) {
        points[i].x = -1;
        points[i].y = -1;
    }
    int inc = 0;                            // number of above-threshold pixels found so far
    byte[] data = (byte[]) frame.getData(); // the raw frame bytes
    // Walk the buffer in steps of three: of each RGB triplet we only want the red channel.
    // (A stricter variant also required green and blue to be below a lower threshold,
    // to reject white light sources.)
    for (int i = 0; i < data.length; i += 3) {
        if (unsignedByteToInt(data[i + 2]) > THRESHOLD && inc < points.length) {
            points[inc].x = (i % (3 * CAMERASIZEX)) / 3; // column of this pixel
            points[inc].y = i / (3 * CAMERASIZEX);       // row of this pixel
            inc++;
        }
    }
    // Average the locations of the points we found
    ave.x = 0;
    ave.y = 0;
    for (int i = 0; i < inc; i++) {
        if (points[i].x != -1) {
            ave.x += points[i].x;
        }
        if (points[i].y != -1) {
            ave.y += points[i].y;
        }
    }
    if (inc > 3) { // enough bright pixels that we probably have a laser pointer in view
        ave.x /= inc; // finish calculating the average
        ave.y /= inc;
        PerspectiveTransform mytransform = PerspectiveTransform.getQuadToQuad(
                mapping[0].getX(), mapping[0].getY(), mapping[1].getX(), mapping[1].getY(),
                mapping[2].getX(), mapping[2].getY(), mapping[3].getX(), mapping[3].getY(),
                correct[0].getX(), correct[0].getY(), correct[1].getX(), correct[1].getY(),
                correct[2].getX(), correct[2].getY(), correct[3].getX(), correct[3].getY());
        Point2D result = mytransform.transform(new Point(ave.x, ave.y), null);
        in_space = !(result.getX() < 0 || result.getY() < 0
                || result.getX() > SCREENSIZEX || result.getY() > SCREENSIZEY);
        if (!on) {
            fireLaserOn(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
            on = true;
        }
        if (in_space && !last_in_space) {
            fireLaserEntered(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, true));
        }
        if (result.getX() != last_point.getX() || result.getY() != last_point.getY()) {
            fireLaserMoved(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
        } else {
            fireLaserStable(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, in_space));
        }
        if (!in_space && last_in_space) {
            fireLaserExited(new LaserEvent(result, new Point(ave.x, ave.y), last_point, last_raw_point, false));
        }
        last_time = 0;
        last_point = new Point2D.Double(result.getX(), result.getY());
    } else if (last_time == 5) { // five frames since we last saw the pointer: it must have disappeared
        if (in_space) {
            fireLaserExited(new LaserEvent(-1, -1, ave.x, ave.y, (int) last_point.getX(), (int) last_point.getY(),
                    (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
        }
        fireLaserOff(new LaserEvent(-1, -1, ave.x, ave.y, (int) last_point.getX(), (int) last_point.getY(),
                (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
        on = false;
        in_space = false;
    }
    // Only report a raw point if the average actually landed inside the camera frame
    // (the parentheses matter: && binds tighter than ||)
    if ((ave.x > 0 || ave.y > 0) && ave.x < CAMERASIZEX && ave.y < CAMERASIZEY) {
        fireLaserRaw(new LaserEvent(-1, -1, ave.x, ave.y, -1, -1,
                (int) last_raw_point.getX(), (int) last_raw_point.getY(), in_space));
    }
    last_time++; // gets reset to 0 on every frame in which the laser is visible
    last_raw_point = new Point(ave.x, ave.y); // remember the raw point no matter what
    last_in_space = in_space;
    /******************************** End Laser Detection Code *********************************/
}

public int unsignedByteToInt(byte b) {
    return (int) b & 0xFF;
}

This next part is pretty standard code for adding event listeners. You can see which laser events are getting passed. I intentionally made it similar to how mouse listeners are used.

Vector<LaserListener> laserListeners = new Vector<LaserListener>();

public void addLaserListener(LaserListener l) {
    laserListeners.add(l);
}

public void removeLaserListener(LaserListener l) {
    laserListeners.remove(l);
}

private void fireLaserOn(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserOn(e);
    }
}

private void fireLaserOff(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserOff(e);
    }
}

private void fireLaserMoved(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserMoved(e);
    }
}

private void fireLaserStable(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserStable(e);
    }
}

private void fireLaserEntered(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserEntered(e);
    }
}

private void fireLaserExited(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserExited(e);
    }
}

private void fireLaserRaw(LaserEvent e) {
    for (LaserListener l : laserListeners) {
        l.laserRaw(e);
    }
}
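From the client side, registration then looks just like adding a Swing mouse listener. Here is a self-contained sketch of what that looks like; the interface below is inferred from the fire methods above and trimmed to three events, and the real LaserEvent (which carries raw and transformed coordinates) is replaced with a plain Object placeholder.

```java
import java.util.Vector;

// Trimmed stand-in for the real LaserListener interface; the actual one also
// has laserStable, laserEntered, laserExited, and laserRaw callbacks.
interface LaserListener {
    void laserOn(Object e);
    void laserOff(Object e);
    void laserMoved(Object e);
}

public class ListenerDemo {
    static final Vector<LaserListener> listeners = new Vector<LaserListener>();
    static int moves = 0; // counts move events, just to show the callbacks firing

    public static void main(String[] args) {
        // Register an anonymous listener, exactly as you would a MouseListener
        listeners.add(new LaserListener() {
            public void laserOn(Object e) { System.out.println("laser appeared"); }
            public void laserOff(Object e) { System.out.println("laser gone"); }
            public void laserMoved(Object e) { moves++; }
        });
        // Simulate what the frame callback would fire over a few frames
        for (LaserListener l : listeners) {
            l.laserOn(null);
            l.laserMoved(null);
            l.laserMoved(null);
            l.laserOff(null);
        }
    }
}
```

A mouse emulator is just a listener like this whose laserMoved handler moves the cursor to the transformed coordinates.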
  • This algorithm is extremely basic and not robust at all. By just averaging the points above the threshold, I don’t take into consideration if there are multiple lasers on the screen. I also don’t filter out errant pixels that are above the threshold by accident, and I don’t filter out light sources that aren’t moving. A more robust algorithm would do a better job and possibly identify multiple laser pointers.
  • I’m not the first person that has done this, though from what I can tell this is the first post that goes into so much detail and provides code. I have seen other people do this using other platforms, and I have seen other people try to sell this sort of thing. In fact, this post is sort of a response to some people who think they can get away with charging thousands of dollars for something that amounts to a few lines of code and less than $40 in hardware.
  • Something I’d like to see in the future is a projector with a built in camera that is capable of doing this sort of thing natively, perhaps even using the same lens system so that calibration would be moot.
  • You may have seen references to it in this post already, but one thing I mention is having the camera see outside the projected area and how that can be used for additional input. Because the laser pointer doesn’t have buttons, its input abilities are limited. One way to get around this is to take advantage of the space outside the projected area. For example, you could have the laser act as a mouse while inside the projector area, but if the laser moves up and down the side next to the projected area it could act as a scroll wheel. In a simple paint application, I had the space above and below the area change the brush color, and the sides changed the brush thickness or changed input modes. This turns out to be extremely useful as a way of adding interactivity to the system without requiring new hardware or covering up the projected area. As far as I can tell, no one else has done this.
  • I have seen laser pointers with buttons that go forward and backward in a slideshow and have a dongle that plugs into the computer. These are much more expensive than generic laser pointers but could be reused to make the laser pointer much more useful.
  • Just like a new mouse takes practice, the laser pointer takes a lot of practice. Not just smooth and accurate movement, but turning it on and off where you want to. Typically, releasing the power button on a laser pointer makes the point jump a slight amount, so if you’re trying to release the laser over a small on-screen button, you have to account for that jump so the pointer turns off while it’s still over the right spot.
  • This was a cool project. It took some time to get everything working, and I’m throwing it out there. Please don’t use this as a school project without adding to it. I’ve had people ask me to give them my source code so they could just use it and not do anything on their own. That’s weak and you won’t learn anything, and professors are just as good at searching the web.
  • If you do use this, please send me a note at laser@bobbaddeley.com. It’s always neat to see other people using my stuff.
  • If you plan to sell this, naturally I’d like a cut. Karma behooves you to contact me to work something out, but one of the reasons I put this out there is that I don’t think it has a lot of commercial promise. There are already people trying to sell this sort of thing, and there are also people like me putting out open source applications.
  • If you try to patent any of the concepts described in this post, I will put up a fight. I have youtube videos, code, and witnesses dating back a few years, and there is plenty of prior art from other people as well.
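The “space outside the projected area” trick from the list above can be sketched as a simple region classifier over the transformed coordinates. The region names and the routing of regions to actions are my own illustration, not code from the project.

```java
// Sketch of the extra-input trick: once a point is transformed into screen
// coordinates, anything outside [0,width] x [0,height] is still a valid laser
// position and can be routed to side-channel actions (scroll, brush size, etc.).
public class EdgeRegions {
    enum Region { INSIDE, LEFT_STRIP, RIGHT_STRIP, ABOVE, BELOW }

    static Region classify(double x, double y, int width, int height) {
        if (x >= 0 && x <= width && y >= 0 && y <= height) {
            return Region.INSIDE; // normal mouse-style input
        }
        if (y >= 0 && y <= height) {
            // Off to the side but vertically in range: a side strip
            return x < 0 ? Region.LEFT_STRIP : Region.RIGHT_STRIP;
        }
        return y < 0 ? Region.ABOVE : Region.BELOW;
    }
}
```

A paint application could then treat LEFT_STRIP as a thickness slider (mapping y to brush size) and ABOVE/BELOW as color pickers, matching the behavior described above.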

It’s no secret that I love my car. It’s been extremely dependable, has treated me very well, has a good personality and an adventurous attitude, and doesn’t ask for much (it’s a 2000 Chrysler Neon, and yes, I mean Chrysler). I’ve had it for almost 10 years and put over 100,000 miles on it myself in addition to the 20,000 that were on it when I got it used. If I were to get another car, I’d look for something exactly like the one I have.

But once in a great while it has small issues. Once a wiring harness broke loose and caused the rear lights to go out. Other than that, it’s worked very well and could probably go another hundred thousand miles without problems.

About a week ago I put my key in the door to unlock it and found that it turned freely. I had to unlock the passenger side and then unlock the driver side from the inside. For a few days I drove around without locking the door. Monday I finally got an opportunity to examine the problem. I was able to disassemble the door relatively easily; it was fairly straightforward except for the part where the window handle was connected, but I managed to find the service manual online and pop the handle off. Then I could get into the lock mechanism and see where the problem was. It didn’t take long to discover: the rod that connects the lock mechanism to the key had slipped off, and the piece that held it on was missing. Figuring it was probably at the bottom of the door frame, I felt around and found it. Yep, there was the problem.

That piece should be symmetrical. The part that had broken off was only about 2 millimeters wide, and without it the piece slipped off the lock and no longer held the rod in place. It didn’t take much jostling for the rod to fall out.

I didn’t have any parts exactly like that, and I was up to the challenge of fixing it with parts that I had around the house. I made a crude washer out of a piece of scrap tin from a can. Then I made a springy curl of stiff wire to take the place of the part that broke. I installed it onto the lock mechanism and played with it a little to make sure it was stuck on well. When I tried to take it off to adjust it, I couldn’t even get it off without serious effort, so I just left it on there. I tested it thoroughly before putting the door back together. With the door completely reassembled, I tested it some more, and it worked exactly like it had originally.

I’m kind of glad that my car is mostly mechanical and doesn’t have a lot of electronic parts. Electronic locks or windows would have made this a much more difficult operation. I’m also happy that I was able to build the parts I needed from scratch with basic tools. Plus, I always enjoy doing things with my hands, seeing the results, and saving money in the process.

Saturday I had a party for work, so I thought I’d throw together a cheesecake. I used the recipe I’ve used a few times in the past. Better Homes and Gardens, by the way. Rather than melt some semi-sweet chocolate, I went instead with a bottle of Hershey chocolate sauce. I was getting to the bottom of the bottle, and I noticed that as I squeezed, the sauce would spatter out in neat randomness. So I sprayed it on top of the swirled cheesecake filling, not realizing that it would ultimately be the cause of a huge problem.

The cake cooked fine, and after I took it out of the oven it still looked good, but the spots where there was sauce looked a little weak and were starting to crack. When the top of a cheesecake begins to crack, the cracks turn to crevasses as it cools down, and that’s exactly what happened: three pretty big cracks opened up as it cooled. I looked around for something to fix it and found a block of milk chocolate that Erin had given me. I thought I’d shave chocolate onto the top to see if it would cover up the cracks. But the chocolate shavings weren’t as silky smooth as I’d hoped, and they broke up into smaller pieces than I expected. It was time to go to the party, so I decided not to bring the cheesecake. That turned out to be ok, though, because there was already a lot of food there.

Aesthetically, the cheesecake was mediocre. It tasted great, though. Here’s a picture, but remember, it only looks average; it’s too bad I can’t make the web lick-able.

I’ve been working on my laser pointer recently, and in the course of that work I made an interesting biological discovery. Laser pointers are ridiculously bright. You can shine one on a finger and it will light the finger up so that you can see the glow from the other side.

You’re not supposed to shine lasers directly into the eye because they are so bright. Most laser pointers, though, are class 3 or lower, meaning brief exposure won’t do permanent damage. Still, my eyes are not something I like to risk, so I don’t shine it into my eye intentionally.

However, while fiddling with the pointer as I waited for a process to finish, I held it against my temple, turned it on absentmindedly, and saw some red. At first I thought the laser must be reflecting off of something and getting into my eye somehow, but that didn’t make any sense. It was a blurry red light, clearly more intense closer to my temple. The light wasn’t escaping and reflecting off anything, either, because the laser was pressed directly against my temple. I concluded that the light was actually traveling through my temple, into my eyeball, and hitting my retina without ever passing through my pupil.

This is not a huge discovery. In fact, you can simulate the effect quite easily with nothing but a bright environment. Close your eyes; it’s dark, but some light still passes through the lids and into the eye. Now cup your hands over your closed eyes, and it gets even darker, because you’ve blocked that light too. With the temple, it just takes more light to get through to the retina.

I’m not too concerned about losing my eyesight from doing this, but I’m not going to keep doing it. It’s interesting that I can see light without it entering through the front of my eye.