Saturday, July 28, 2012

Week 4

Our New Laser
Looking back on it, this was quite a busy week.  Our progress on Monday was a little disappointing because we noticed that our laser was dying and was no longer bright enough for our control loop.  We switched to a new laser, which is red instead of green.  Installing the new laser meant realigning a bunch of things, which is time-consuming.  We're getting better at it, but it is still a lot of trial and error.


We had two main accomplishments on our project this week.  First we made a glass wheel that spins in the path of the laser.  The glass has nail polish painted on it to distort the wavefront, so we can see whether the controller can correct for it.  The nail polish process took a lot of experimenting to get just right.  We wanted the thickness of the polish to vary by a few microns (thousandths of a millimeter).  We tried different brush strokes, tried thinning out the nail polish, and then viewed the results with the interferometer and with our wavefront sensor until we had the thickness variation we wanted.

Our glass wheel and its motor control

We attached the glass to a motor so that we can see how fast it can spin while still having the controller keep up with the correction.  We tried a few different motors, but they all turned too fast.  Chris (a grad student in our lab) donated this motor, which he had used in a class project last year.  It is perfect because it spins slowly, is small enough to mount in the space we wanted, and its speed can be software controlled.  In the picture you can see the little motor controller board.

Our second main accomplishment was the user interface for our controller.  It used to just pop up windows to display different graphs as it ran.  It wasn't possible to change any parameters without editing the software and rerunning it.  We want the optics system to be used as a teaching tool in Dr. Bifano's class, so we wanted screens that show what is going on, let the user step through all of the processes in the software, see what is happening at each step, and interact with it where possible.  We had tried to use MATLAB's user interface builder (GUIDE) before, and it was really cumbersome and created a lot of extra code.  This time we wrote the software for the screens without using GUIDE, and it was a lot easier to make what we wanted.  It is really fun to see it all come together.  Here are a few of the screenshots:
Setting the Exposure Level
"Poking" the Mirror

Controlling the Wavefront


We also went back to the clean room to finish making our wafers. This week we put them upside down in a vacuum chamber and evaporated titanium and then gold onto them. The gold was 250 angstroms thick. Once the gold was on, we washed the wafers in acetone; everywhere there was photoresist, the photoresist (and the gold and titanium on top of it) washed off, leaving just our pattern in gold. It was really amazing because the pattern appeared clearer than at any point in the process. It looks just like a photograph. In the picture below we had just taken this tray out of the vacuum chamber and were about to take our wafers (the circles) out to wash them. Pictured are Paul, who runs the lab, and Valerie, an RET from Sharon, MA.

One of the RET teams is working on a weather balloon that they will launch next week. It is a prototype for future launches with students. They have worked on a set of experiments that students would do leading up to the launch. On Friday the rest of the RET teachers posed as their class and did a couple of the experiments. In the first one we wired up thermocouple circuits on breadboards. These breadboards will be carried up by the balloon and will record temperature during the trip. Then we went to the computer lab and ran Monte Carlo simulations that would predict where the balloon would land. There was a website that ran the simulation given the launch location (which will be Mt. Greylock) and the launch date and time. We each ran the simulation with 5 different times around the planned time and noted where it said the balloon would land. Then we each marked our landing locations with push pins on a big map. They seemed to center around Brattleboro. It will be exciting on launch day to try to retrieve the balloon. It will have a GPS on it so if that works it will help out a lot.
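The Monte Carlo idea behind that website can be sketched in a few lines: repeat the same drift calculation many times with randomly perturbed winds and look at the scatter of landing points. This little Python sketch is just the concept; the wind speed, heading, and ascent numbers are made up for illustration, not the site's actual model:

```python
import math
import random

def simulate_landing(n_runs=1000, ascent_h=30000.0, seed=0):
    """Toy Monte Carlo scatter of balloon landing points.

    Each run drifts the balloon with a randomly perturbed wind; the
    mean wind (10 m/s toward 100 degrees) and the ~5 m/s average
    vertical speed are invented numbers, just for illustration.
    """
    rng = random.Random(seed)
    landings = []
    for _ in range(n_runs):
        speed = rng.gauss(10.0, 2.0)                  # wind speed, m/s
        heading = math.radians(rng.gauss(100.0, 15.0))  # wind direction
        flight_time = ascent_h / 5.0                  # seconds aloft
        dx = speed * flight_time * math.sin(heading)  # east drift, m
        dy = speed * flight_time * math.cos(heading)  # north drift, m
        landings.append((dx, dy))
    mean_x = sum(p[0] for p in landings) / n_runs
    mean_y = sum(p[1] for p in landings) / n_runs
    return landings, (mean_x, mean_y)

landings, center = simulate_landing()
```

Plotting those thousand simulated landings on a map would give the same kind of pin cluster we built by hand, just much faster.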


Friday, July 20, 2012

Week 3

This week included a couple of special visits.  First Michelle and I went inside the clean room with four other RET teachers.  BU has a Class 100 clean room that is used for photolithography.  Class 100 means that there are no more than 100 dust particles (half a micron or larger) per cubic foot of air, as compared with the 200,000 or so you would normally find.  In order to keep the clean room free of dust, anyone who enters has to put on a lot of protective equipment.  First we put on white slippers, a jumpsuit, a hairnet, two layers of rubber gloves and safety glasses.  That amount of gear let us enter a Class 1000 clean room.  Once we got in there we put white boots over our white slippers and a white cloth "helmet" over our hairnet.  From there we could go into the Class 100 room.  Here is a picture of Michelle and me with all of our gear on.
Why are we yellow?

Photolithography is the process of transferring an image onto a silicon wafer.  Before coming to the lab we each designed a 3.5-inch-diameter mask for our wafers.  We just used Word and could put text or pictures on it.  Paul Mak, who works in the lab, printed our masks onto what looks like an overhead transparency.  When we are done we will get to keep our wafer, which will have our image etched in gold on it!  It will take us two sessions to make it.  In our first session we poured a dark purplish-red liquid called photoresist on our wafer and put it in a machine that spun it at high speed so we got a nice thin layer of photoresist on the wafer.  We cooked it on a hot plate, then placed our mask on top of it and put it in a machine that shines bright UV light on it.  Wherever the mask was clear, the photoresist would break down; wherever it was black, it was protected.  The yellow light in the room keeps the photoresist from breaking down while we have it out in the open.  Then we put the wafer in some developing fluid that washed away the photoresist in the places that were exposed to the UV, and baked it some more.  Next week we will return to the lab to do the part of the process that puts gold wherever the photoresist washed away, and then we will wash away the remaining photoresist and have our wafer with our design in gold.

Adaptive Optics in Action

Our second trip of the week was to the Joslin Diabetes Center.  They have an adaptive optics system there that they are using to image the retina.  They can track the progress of eye disease by counting photoreceptors that have died rather than by waiting for the loss of photoreceptors to affect a person's vision.  We met Sonja, a medical student from Vienna who was performing the tests on the patients, and Steve, an engineer from Boston Micro Machines, the company that makes the deformable mirror.  He and Dr. Bifano were there to make some adjustments to the system so it would work with a backup laser until the full-function replacement laser comes in.  Here we are with Dr. Bifano and the Joslin adaptive optics system.



Lastly, we made a movie about our experience so we can show our students when we get back to school in the fall.


Friday, July 13, 2012

Week 2

Our goals for this week centered around the adaptive optics control loop itself.  Last week we worked on getting the exposure and alignment set, and now we were ready to experiment with our controller: to pick a good feedback gain, to find a criterion for deciding when the control loop was done, and to find a way to drive to a desired image (rather than to a flat image).

We decided to figure out a way to drive to a desired image first so we could test our controller gain and end criterion with different images.  We found that our lab had some software that generated Zernike polynomials, which are important in optics.  They make images like the ones shown in the circles below.  We made it so our software could pick which one we wanted to use for our control image.  We also made an image of the letter M just by editing the rows and columns of a 12 by 12 matrix.  We chose M because Michelle's initials and my maiden-name initials are both MM.  We make the image by sending our image matrix as commands to the DM (deformable mirror); when the light bounces off of it, it gets deformed in such a way that the wavefront sensor "sees" the image we made.  The sensor actually measures the ways the wavefront of our image differs from a flat one and then converts that into an image to display.
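For the curious, the "M" really is just a matter of filling in entries of a 12 by 12 matrix. Here is a rough Python sketch of the idea (our actual code is in MATLAB, and the real values get scaled into DM commands):

```python
def letter_m(n=12, depth=1.0):
    """Build an n-by-n matrix of poke depths spelling the letter M.

    The depth value is illustrative; in the real setup each entry
    would be converted into a command for that actuator.
    """
    m = [[0.0] * n for _ in range(n)]
    for r in range(n):
        m[r][1] = depth          # left vertical stroke
        m[r][n - 2] = depth      # right vertical stroke
    for r in range(n // 2):
        m[r][2 + r] = depth      # left diagonal, down to the middle
        m[r][n - 3 - r] = depth  # right diagonal, down to the middle
    return m

pattern = letter_m()
```

Sending a matrix like this to the DM deforms the mirror surface into the letter shape, which is what the wavefront sensor then picks up.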

Once we had the ability to make different images, we tried varying the gain value for the controller.  We saw that, sure enough, if you make the gain too high the loop goes unstable, and if you make it too low it takes a long time to get to the image you want.  We finally settled on 40, which is what Dr. Bifano thought might be a good place to start.  At first we just let the loop run 20 times.  Then we tried to make the controller smarter about deciding when the image was good enough to stop.  That turned out to be a hard question.  Our feedback is in slopes measured by the wavefront sensor, but we'd like to specify our end criterion in terms of differences between the desired and actual images, and we don't have that directly.  We're not using a camera that takes pictures; we just have these slopes.  Talking to Dr. Bifano led us to realize that we should find out the relationship between the command we send to the DM and the actual number of micrometers it moves.  (Those micrometers are directly related to the wavefront aberrations.)  So he introduced us to a new piece of lab equipment, the interferometer.
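A toy version of the loop shows both effects. This Python sketch uses a single made-up "actuator" whose response to a command is scaled by an assumed sensitivity of 0.04, chosen so that a gain around 40 behaves nicely; the real loop runs on a whole vector of wavefront slopes:

```python
def run_loop(gain, target=1.0, tol=1e-3, max_iters=100):
    """Toy one-actuator feedback loop (illustration only).

    The plant responds with 0.04 units of motion per command unit,
    an invented sensitivity just for this sketch.
    """
    command = 0.0
    response_per_command = 0.04   # assumed plant sensitivity
    for i in range(max_iters):
        measured = response_per_command * command
        error = target - measured
        if abs(error) < tol:      # stop criterion: error small enough
            return i, measured
        command += gain * error   # integrator update
    return max_iters, response_per_command * command
```

With gain 40 the error shrinks by a constant factor each pass and the tolerance check stops the loop early; with gain 60 the product of gain and sensitivity exceeds 2, so the error grows on every pass and the loop burns all of its iterations.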

An interferometer is a tool that can measure very small distances.  It looks like a big microscope, but it is really sending two beams of light, one that bounces off a reference mirror and one that bounces off whatever you are trying to measure.  The interference pattern formed when these two beams add together is used to make the measurement.  Here are some pictures of the interferometer itself and of some of the measurement tools on it that we used to find out how far the actuators move.  The video shows what the measurement scan looked like when we commanded the actuators to move in the shape of an M.
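The arithmetic that turns interference into distance is short enough to write down. Because the light crosses the surface height twice on reflection, one full fringe (2π of phase) corresponds to half a wavelength of height. The 633 nm wavelength below is an assumption (a typical red helium-neon line), not necessarily what this instrument uses:

```python
import math

LAMBDA_NM = 633.0   # assumed laser wavelength, in nanometers

def height_from_phase(phase_rad):
    """Convert measured interference phase to surface height (nm).

    Reflection doubles the optical path change, so
    h = phase * lambda / (4 * pi).
    """
    return phase_rad * LAMBDA_NM / (4.0 * math.pi)

# One full fringe of phase corresponds to lambda/2 of height
step_nm = height_from_phase(2.0 * math.pi)
```

So a surface step that shifts the pattern by one whole fringe is about 316 nm tall at this wavelength, which is why the instrument can resolve such tiny actuator motions.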






The focusing of the interferometer is quite a process.  It sits on a floating table because when you are measuring something on the order of nanometers, any vibration is a disaster.  You can control the tilt of the platform on which the item to be measured sits, and the vertical distance to it.  As you adjust these you have to watch for the image to come into focus on the little monitor shown in the upper right corner of the picture.  Once the image is focused, you fine-tune the distance until some interference bars come into view.  Then you adjust the tilt in both directions to spread the bars out and align them vertically and horizontally.  It felt like we had witnessed a minor miracle when we finally saw it work.



Once we got it aligned, we tried poking the actuators (by commanding them from the computer) and then measured how much they moved.  We experimented with moving just one actuator compared with a 2 by 2 or 3 by 3 square.  Then we poked each of the 140 actuators to be sure they were all working and each moved about the same amount.  It takes a lot of patience!  Then we worked out what commands we would need to send to the actuators to make the surface perfectly flat.  It took a lot of adjusting, but we now have our flat reference for our controller and the conversion factor from command to actuator deflection.
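The conversion factor is essentially the slope of a line fit through the poke data. A minimal Python sketch, with made-up numbers standing in for our interferometer measurements:

```python
def calibrate(commands, deflections_um):
    """Fit deflection = k * command by least squares through the origin.

    Returns k, the micrometers of deflection per command unit.
    """
    num = sum(c * d for c, d in zip(commands, deflections_um))
    den = sum(c * c for c in commands)
    return num / den

# Hypothetical poke data: roughly 0.01 um of motion per command unit
k = calibrate([10, 20, 30, 40], [0.1, 0.21, 0.29, 0.41])
```

With a factor like this in hand, a stopping tolerance stated in micrometers of wavefront error can be translated directly into command units.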

Genius moment of the week:  When we came into work on the second day with our interferometer our system wouldn't talk to the deformable mirror.  We did everything we knew how to do about resetting the software and finally resorted to asking our ever helpful grad student, Chris.  He found the problem right away:  our USB cable was unplugged!  That was humbling.

Friday, July 6, 2012

Week 1

What am I doing at BU?

This week I started my summer at BU as part of the RET (Research Experience for Teachers) program.  This program gives ten high school and middle school teachers the opportunity to work full time for six weeks of the summer in a lab in the Photonics Center at BU.  We are teamed in pairs, typically a veteran high school teacher with a new teacher or a middle school teacher.   I am teamed up with Michelle McMillan who teaches 6th grade science in NH.  She is originally from New Mexico.  She studied Physics and Astronomy in Arizona and then did a City Year in NH and liked it enough to stay in NH and to keep teaching. She went to UNH for her master's in teaching and just finished her first year with her own class.

Michelle and I are assigned to work with Dr. Bifano, a mechanical engineering professor and the director of the Photonics Center.  Dr. Bifano's research is in the field of adaptive optics.  He and his team have developed deformable mirrors, which are small mirrors whose surface can be computer controlled.  The mirror that we are working with has 140 actuators arranged in a 12 by 12 grid (the four corners are inactive).  These mirrors have enabled the field of adaptive optics to improve images of the retina, for example.  The problem with imaging the back of the eye is that the wavefront of the light gets distorted as it travels through the eye itself.  The idea of adaptive optics is to measure that distortion and then deform a mirror into a shape that will make the wavefront smooth after it reflects off the deformed mirror.  When the smooth wavefront reaches a camera, it creates a much improved image.
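The bookkeeping at the heart of this can be written in a couple of lines. Light reflecting off the mirror picks up twice the surface deflection, so the mirror only needs to move by minus half the measured error to cancel it. This is a hypothetical one-dimensional Python sketch of that idea, not Dr. Bifano's actual control code:

```python
def dm_correction(wavefront_error_um):
    """Mirror shape that cancels a measured wavefront error.

    Reflection doubles the surface deflection, so the mirror
    moves by minus half the error.  Toy 1-D version; the real
    DM is a 12-by-12 grid of actuators.
    """
    return [-e / 2.0 for e in wavefront_error_um]

def residual(wavefront_error_um, mirror_um):
    # Wavefront after reflection: original error plus twice the surface
    return [e + 2.0 * m for e, m in zip(wavefront_error_um, mirror_um)]

err = [0.4, -0.2, 0.1]                       # made-up aberration, um
flat = residual(err, dm_correction(err))     # corrected wavefront
```

After the correction the residual is zero everywhere, which is exactly the "smooth wavefront" the camera needs.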

Being on a College Campus Again

I visited a lot of college campuses these past few years with my two oldest as they decided where they wanted to go to college.  It made me wish I were the one going back to school.  So it is fun to be back at college, even if just for a summer.  I have been really impressed with the facilities at BU and with the people I have met.  We heard presentations from some undergraduates who are also doing research projects.  They got into these projects mostly by asking their professors if they could participate in research.  It made me want to be sure to tell my students not to just take classes when they get to college but to pursue these other opportunities too.  To be honest, I'm also having fun riding the train and being in Boston every day.  I'm sure the commute would get tiresome after a while, but it's fun for the summer.

Our Tasks This Summer

Michelle and I have several things we are working on this summer.  For one, we are setting up an adaptive optics controller on a work bench that could be used as a demo to explain what adaptive optics is.  First we are just getting it to work, and then we are going to see what we can demonstrate with it.  This demo could be used in the class Dr. Bifano teaches in the honors program.  It is really helping us understand adaptive optics ourselves.  We may also work on using a wavefront sensor to measure aberrations when imaging the backside of a computer chip; I'm not quite sure what that all entails yet.  Finally, Dr. Bifano has a bunch of neat labs/demos that he did with his photonics class last year that he would like refined.  We are going to see if we can improve on some of them.

Our Lab


We are working on the 7th floor of the beautiful new Photonics building that sits right on the Mass Pike.  The view from the meeting area just outside our lab is incredible, as we have nearly floor-to-ceiling windows that stretch 24 feet wide.  Pictures do not do it justice.  The lab itself has a couple of smaller rooms in which three graduate students have offices, and then a big common lab area where we have our own workspace.  The graduate students, Chris and Hari, have been extremely kind and helpful to us.  The third student is in China getting married this summer.

Our Progress This Week

I thought the first week would be spent doing a lot of reading and trying to figure out what we were going to do.  Instead, Dr. Bifano spent a lot of time getting us up to speed and had a place ready for us to work, so we were able to jump right in.  On our first day we recreated one of his lab demos, which was to build a light bulb out of some tungsten, argon, a coffee pot and a car battery charger.  We got it to light, though we had some smoke.  The good news is it was not enough smoke to set off the smoke alarm.  That could have been an embarrassing first day.

In the afternoon of the first day we set up the lasers, mirrors and lenses on the lab table for our adaptive optics system.  We learned how to align the laser with the mirrors and lenses and how to magnify images with pairs of lenses (called "telescopes," but not the telescopes you would use to look at the moon).  The next day Dr. Bifano taught us how adaptive optics works and helped us get all of the software running in our lab.  The controller runs in MATLAB, which fortunately Michelle and I have used before.

Thursday Dr. Bifano was on vacation and we were on our own to see if we could get some improvements made to the software.  When we first started it took a few attempts just to get everything set up to run correctly, but then we figured out how to write software that auto-adjusts the exposure on the wavefront sensor.  Now we can manually turn the neutral density filter and the software will automatically pick the right exposure value so the image isn't saturated.  It was very exciting and we were pretty proud.  We also figured out how to improve the slope display so the scaling is good, and how to mask out the actuators that are outside of our field of view.  It was an amazing first morning on our own.
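The auto-exposure logic boils down to: grab a frame, shorten the exposure if the brightest pixel is saturated, and lengthen it if the frame is well below saturation. Here is a Python sketch of that idea; our real code is in MATLAB, and the halve/grow-by-25% rule here is just one plausible strategy, not necessarily what we implemented:

```python
def auto_exposure(capture, start_ms=10.0, sat_level=255, target_frac=0.8,
                  min_ms=0.05, max_ms=500.0):
    """Pick an exposure so the brightest pixel sits just below saturation.

    `capture(exposure_ms)` stands in for the sensor's frame grab and
    returns a list of pixel values; all the parameters are assumptions.
    """
    exp = start_ms
    while exp > min_ms and max(capture(exp)) >= sat_level:
        exp /= 2.0        # saturated: back off quickly
    while exp < max_ms and max(capture(exp)) < target_frac * sat_level:
        exp *= 1.25       # too dim: brighten gradually
    return exp

# Fake sensor: brightness proportional to exposure, clipped at 255
fake = lambda e: [min(255, int(40 * e))]
chosen = auto_exposure(fake)
```

Against the fake sensor, the search settles on an exposure whose brightest pixel lands between 80% of full scale and saturation.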

Then in the afternoon we started to figure out how to align the pupil with the center of the deformable mirror.  It seemed like every time we thought we were onto something, we'd run it and get something other than what we expected.  Sometimes it was because we had accidentally left the mirror "poked" when we set the reference (or some similar problem).  I also personally kept getting confused between the wavefront sensor's image and the colormapped slope image, and x and y are reversed on some things but not others.  We could tell from our work that we needed to physically adjust the camera's position, and on Friday Chris helped us find a stand for it with knobs for fine adjustments.  We spent the rest of Friday working out all of the details of the aligning software, and I think we have it understood and working for the most part.  We still want to find a way to determine the radius automatically.  Finally we just printed out all of our code and decided to take a fresh look next week.
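One idea for finding the radius automatically: threshold the sensor image, take the centroid of the bright pixels as the pupil center, and back the radius out of the bright area (area = πr²). This is only a Python sketch of the idea on a synthetic image, not our working alignment code:

```python
import math

def find_pupil(image, threshold):
    """Estimate pupil center and radius from a 2-D intensity image.

    Hypothetical approach: centroid of above-threshold pixels gives
    the center; the bright-pixel count gives the radius via area.
    """
    pts = [(x, y) for y, row in enumerate(image)
           for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None, 0.0
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    radius = math.sqrt(n / math.pi)   # invert area = pi * r^2
    return (cx, cy), radius

# Synthetic 21x21 image with a bright disk of radius 5 at (10, 10)
img = [[1 if (x - 10) ** 2 + (y - 10) ** 2 <= 25 else 0
        for x in range(21)] for y in range(21)]
center, r = find_pupil(img, 1)
```

On the synthetic disk this recovers the center exactly and the radius to within a fraction of a pixel, which would be plenty for lining up the pupil with the DM.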

It has been such a great experience already.  I felt like I learned a lot this week and I really enjoy working with everyone in our lab.  It has been especially great being teamed with Michelle.  I think tackling these tasks could have been overwhelming on my own, but it's really fun getting to figure out things together.