Friday, July 13, 2012

Week 2

Our goals for this week centered on the adaptive optics control loop itself.  Last week we worked on getting the exposure and alignment set; now we were ready to experiment with our controller to pick a good feedback gain, to find a criterion for deciding when the control loop was done, and to find a way to drive to a desired image (rather than to a flat image).

We decided to figure out a way to drive to a desired image first, so we could test our controller gain and end criterion with different images.  We found that our lab had some software that generates Zernike polynomials, which are important in optics.  They make images like the ones shown in the circles below.  We made it so our software could pick which one we wanted to use as our control image.  We also made an image of the letter M just by editing the rows and columns of a 12 by 12 matrix.  We chose M because Michelle's initials and my maiden-name initials are both MM.  We make the image by sending our image matrix as commands to the DM (deformable mirror); when the light bounces off the DM, it gets deformed in such a way that the wavefront sensor "sees" the image we want.  The sensor actually measures the ways the wavefront of our image differs from a flat one and then converts that into an image to display.
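The M image really is just a pattern of values on the 12 by 12 actuator grid.  Here is a rough sketch of how such a matrix could be built (the exact strokes, and the poke value of 1.0, are made up for illustration and may not match what we actually sent to the DM):

```python
def make_m_pattern(n=12):
    """Build an n-by-n command matrix tracing a rough letter M."""
    grid = [[0.0] * n for _ in range(n)]
    for r in range(n):
        grid[r][1] = 1.0        # left vertical stroke of the M
        grid[r][n - 2] = 1.0    # right vertical stroke of the M
    for r in range(n // 2):
        grid[r][1 + r] = 1.0        # diagonal from top-left down to the middle
        grid[r][n - 2 - r] = 1.0    # diagonal from top-right down to the middle
    return grid

m = make_m_pattern()
```

Each entry would then be scaled into whatever command units the DM expects before being sent out.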

Once we had the ability to make different images, we tried varying the gain value for the controller.  Sure enough, if you make the gain too high the loop goes unstable, and if you make it too low it takes a long time to reach the image you want.  We finally settled on 40, which is what Dr. Bifano thought might be a good place to start.  At first we just let the loop run for 20 iterations.  Then we tried to make the controller smarter about deciding when the image was good enough to stop.  That turned out to be a harder question than we expected.  Our feedback is in slopes measured by the wavefront sensor, but we'd like to specify our end criterion in terms of differences between the desired and actual images.  We don't really have that directly: we're not using a camera that takes pictures; we just have these slopes.  Talking to Dr. Bifano led us to realize that we should find out the relationship between the command we send to the DM and the actual number of micrometers it moves.  (Those micrometers are directly related to the wavefront aberrations.)  So he introduced us to a new piece of lab equipment, the interferometer.
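The behavior we saw (unstable when the gain is too high, slow when it is too low) is what you would expect from a simple integrating controller.  Here is a toy one-dimensional sketch of that kind of loop, with an RMS-error stopping criterion like the one we were after.  Everything in it is made up for illustration: the plant is modeled as an identity (measured slopes equal commands), and the gain is written as a fraction rather than on whatever scale our software's "40" uses:

```python
def run_loop(target, gain, max_iters=20, tol=1e-3):
    """Toy integrator control loop: step commands toward a target measurement.

    Returns the final command vector and the number of iterations used.
    Stops early once the RMS error drops below tol.
    """
    cmd = [0.0] * len(target)
    for i in range(max_iters):
        measured = cmd  # toy plant: the sensor reads back exactly what we command
        error = [t - m for t, m in zip(target, measured)]
        rms = (sum(e * e for e in error) / len(error)) ** 0.5
        if rms < tol:
            return cmd, i
        cmd = [c + gain * e for c, e in zip(cmd, error)]  # integrate the error
    return cmd, max_iters

# A moderate gain converges; a gain above 2 makes this toy loop diverge,
# because each step multiplies the error by (1 - gain).
cmd_ok, iters_ok = run_loop([1.0] * 5, gain=0.4)
cmd_bad, iters_bad = run_loop([1.0] * 5, gain=2.5)
```

In this toy model the error shrinks by a factor of (1 - gain) each pass, which is why there is a sweet spot between sluggish and unstable.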

An interferometer is a tool that can measure very small distances.  It looks like a big microscope, but it is really sending two beams of light, one that bounces off a reference mirror and one that bounces off whatever you are trying to measure.  The interference pattern formed when these two beams add together is used to make the measurement.  Here are some pictures of the interferometer itself and of some of the measurement tools on it that we used to find out how far the actuators move.  The video shows what the measurement scan looked like when we commanded the actuators to move in the shape of an M.
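The reason such small distances are measurable is that the light reflects off the surface, so the round trip doubles the path difference: one full fringe (2π of phase) corresponds to only half a wavelength of surface height.  A tiny sketch of that conversion (assuming a typical 632.8 nm helium-neon laser wavelength, which may not be what our instrument uses):

```python
import math

def fringe_phase_to_height_nm(phase_rad, wavelength_nm=632.8):
    """Convert measured fringe phase to surface height, in nanometers.

    Reflection doubles the optical path, so 2*pi of phase = wavelength/2
    of height, i.e. height = phase * wavelength / (4*pi).
    """
    return phase_rad * wavelength_nm / (4 * math.pi)

# One full fringe at 632.8 nm is a 316.4 nm step in surface height.
step = fringe_phase_to_height_nm(2 * math.pi)
```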

Focusing the interferometer is quite a process.  It sits on a floating table, because when you are measuring something on the order of nanometers any sort of vibration is a disaster.  You can control the tilt of the platform on which the item to be measured sits, and the vertical distance to it.  As you adjust these, you have to watch for the image to appear in focus on the little monitor shown in the upper right corner of the picture.  Once the image is focused, you fine-tune the distance until some interference bars (fringes) come into view.  Then you adjust the tilt in both directions to separate the bars and make them vertically and horizontally aligned.  It felt like we had witnessed a minor miracle when we finally saw it work.

Once we had it aligned, we tried poking the actuators (by commanding them with the computer) and then measuring how much they moved.  We experimented with moving just one actuator compared with a 2 by 2 or 3 by 3 square.  Then we poked each of the 140 actuators, to be sure they were all working and that they each moved about the same amount.  It takes a lot of patience!  Then we worked out what commands we would need to send to the actuators to make the surface perfectly flat.  It took a lot of adjusting, but we now have our flat reference for our controller and the conversion factor from command to actuator deflection.
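Since the conversion factor is just the slope of measured deflection versus command, it can be estimated from a handful of poke measurements with a least-squares fit through the origin.  A sketch of that calculation (the command and deflection numbers here are made-up example data, not our actual measurements):

```python
def fit_conversion_factor(commands, deflections_um):
    """Least-squares slope through the origin: micrometers per command unit.

    For a line d = k*c through the origin, the best-fit slope is
    k = sum(c*d) / sum(c*c).
    """
    num = sum(c * d for c, d in zip(commands, deflections_um))
    den = sum(c * c for c in commands)
    return num / den

# Hypothetical poke data: commands of 10, 20, 30 units moving the
# actuator 0.03, 0.06, 0.09 micrometers would give 0.003 um/unit.
k = fit_conversion_factor([10, 20, 30], [0.03, 0.06, 0.09])
```

With the factor in hand, the end criterion for the control loop can be stated in physical units (micrometers of surface error) instead of raw sensor slopes.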

Genius moment of the week:  When we came in to work on the second day with our interferometer, our system wouldn't talk to the deformable mirror.  We did everything we knew how to do about resetting the software, and finally resorted to asking our ever-helpful grad student, Chris.  He found the problem right away: our USB cable was unplugged!  That was humbling.
