Wednesday, July 31, 2013

This morning I began working out the issues in the processing code. Much to my surprise, we solved the problem relatively quickly: as of today we have successfully reproduced the Matlab processing code in C++, and (heh) ours is much prettier. Our code also delivers a performance increase from 3-4 frames per second to 16, with no optimizations.
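For anyone curious how we get numbers like that, here's a minimal sketch of timing a loop's frame rate in C++; processFrame is a hypothetical stand-in for one pass of our pipeline, not our actual code:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical stand-in for one pass of our processing pipeline.
void processFrame() {
    // ... the real alignment and combination work would go here ...
}

int main() {
    const int kFrames = 100;
    auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < kFrames; ++i)
        processFrame();

    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::cout << "average fps: " << kFrames / elapsed.count() << std::endl;
    return 0;
}
```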
The rush received as a result of this accomplishment is rivaled only by that known to those who witnessed Ronald Reagan's single-handed destruction of the Berlin Wall. The freedom in the air was palpable in both situations. On a more serious note, today was excellent, and much candy was eaten.
Looking forward to finalizing and practicing our presentation tomorrow.
Tuesday, July 30, 2013
08 Bug fixes and presentation stuff
It was quite an eventful day! We made great progress by working out one of the major bugs from yesterday that was preventing the continuous looping of our system, and hence the processing of our images. After working that bug out we were able to almost fully replicate the functionality of the Matlab code.
This afternoon we also worked on our PowerPoint with Joe. That was swell. I'm actually quite excited for the symposium Friday.
Lunch today was also very good. I had a cheeseburger and drank some Mountain Dew. Tasted like freedom and liberty. God bless.
Monday, July 29, 2013
07 Moving forward
After the morning meeting we finally managed to combine the GUI with the calibration and processing code. This is a major step in the right direction, as the majority of our remaining tasks are tuning and tweaking, aside from escaping the file-stream bottleneck.
Tomorrow we will be investigating what is necessary to adapt our current code so it can repeat infinitely (in theory). Once this is complete we can start working on evading the use of a shared file, and at that point I anticipate a very large performance jump for the multi-camera array.
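To make that concrete, here's a minimal sketch of the run-forever loop we have in mind; fetchFrames and processFrames are hypothetical stand-ins for our actual capture and processing code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical stand-in: the real code pulls one frame per camera.
std::vector<cv::Mat> fetchFrames() {
    return std::vector<cv::Mat>(6, cv::Mat(480, 640, CV_8UC3, cv::Scalar::all(0)));
}

// Hypothetical stand-in: the real code aligns and combines the stack.
cv::Mat processFrames(const std::vector<cv::Mat>& frames) {
    return frames[0].clone();
}

int main() {
    for (;;) {  // "repeated infinitely (in theory)"
        cv::Mat result = processFrames(fetchFrames());
        cv::imshow("array output", result);
        if (cv::waitKey(1) == 27)  // in practice, Esc breaks the loop
            break;
    }
    return 0;
}
```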
yay.
Friday, July 26, 2013
06 Back to business
Today I quickly finished the depth alignment code I was struggling with yesterday. Along with this step forward, I wrapped the calibration and depth processing functionality into one "Processor" class which, with a few more tweaks, will be ready to use in the final product.
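I can't paste the lab's code here, but the shape of the class is roughly this; the names, signatures, and text-file format below are illustrative guesses, not the actual interface:

```cpp
#include <opencv2/opencv.hpp>
#include <fstream>
#include <string>
#include <vector>

// Illustrative sketch only -- the names, signatures, and file format
// here are guesses, not the lab's actual interface.
class Processor {
public:
    // Load per-camera alignment shifts from a plain text file
    // ("dx dy" per line, one line per camera).
    explicit Processor(const std::string& calibFile) {
        std::ifstream in(calibFile.c_str());
        float dx, dy;
        while (in >> dx >> dy)
            shifts_.push_back(cv::Point2f(dx, dy));
    }

    // Shift each camera's frame into alignment and average the stack.
    cv::Mat process(const std::vector<cv::Mat>& frames, float depth = 1.0f) const {
        cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3);
        for (size_t i = 0; i < frames.size(); ++i) {
            // Scale each calibration shift to refocus at the chosen depth.
            cv::Mat warp = (cv::Mat_<double>(2, 3) <<
                1, 0, shifts_[i].x * depth,
                0, 1, shifts_[i].y * depth);
            cv::Mat shifted, f;
            cv::warpAffine(frames[i], shifted, warp, frames[i].size());
            shifted.convertTo(f, CV_32FC3);
            acc += f;
        }
        cv::Mat out;
        acc.convertTo(out, CV_8UC3, 1.0 / frames.size());
        return out;
    }

private:
    std::vector<cv::Point2f> shifts_;
};
```

Packing the calibration loading and the per-frame work into one object is what should make it easy to drop into the GUI later.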
Tomorrow will probably revolve around working out and completing said tweaks, and starting work on streaming images from the cameras directly over USB to eliminate the Raspberry Pi frame-rate bottleneck.
Twas an excellent day.
Thursday, July 25, 2013
05 Returning from WV
This morning I created a small application to help work with the camera array. Afterwards, the majority of the day was spent getting caught up on all the work Elizabeth had completed. Currently I am working on the depth adjustment code. Overall it was a pretty 'meh' day; however, it was good to return after a rather long vacation, and tomorrow should be more productive.
Friday, July 12, 2013
04 Week 1 ends
Well, it's the end of the first week as a CIS intern. I am really enjoying my work in the lab and spent the majority of today cleaning up and packing the calibration code into an easily reusable format. Much of today was spent indicating calibration points and testing to ensure that the alignment was working properly. Calibration data is now stored in a text file, so in the future it is not mandatory to indicate these points on each run.
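The file format is nothing fancy; here's a sketch of the idea (the layout and function names are illustrative, and the lab's actual format may differ):

```cpp
#include <opencv2/core/core.hpp>
#include <fstream>
#include <string>
#include <vector>

// Write the clicked calibration points to a plain text file,
// one "x y" pair per line, so later runs can just reload them.
void savePoints(const std::string& path, const std::vector<cv::Point2f>& pts) {
    std::ofstream out(path.c_str());
    for (size_t i = 0; i < pts.size(); ++i)
        out << pts[i].x << " " << pts[i].y << "\n";
}

// Read the points back; returns an empty vector if the file is missing.
std::vector<cv::Point2f> loadPoints(const std::string& path) {
    std::vector<cv::Point2f> pts;
    std::ifstream in(path.c_str());
    float x, y;
    while (in >> x >> y)
        pts.push_back(cv::Point2f(x, y));
    return pts;
}
```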
It felt great to test the setup using actual array pictures; this helped show how applicable this code will be in the final project. Today Joe, our adviser, also came in to help us set up deadlines and goals for the project, giving us a good idea of the pace we need to keep.
Sadly this will be my last blog post for quite some time, as I'm going on vacation all of next week and half of the week after that. I'll be back at it 7/25/13, assuming all goes well.
Thursday, July 11, 2013
03 Good Progress
Today was a very productive day! After the morning meeting I was able to successfully set up the OpenCV library and start testing its functionality. By midday I was able to replicate the affine transformation that the multi-camera array uses to produce its synthetic aperture effect.
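In OpenCV terms, the core of it is just cv::warpAffine. Here's a minimal example with a made-up 2x3 matrix (our real matrices come from the calibration data); the image file name is also invented:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("camera0.png");  // invented file name
    if (src.empty()) return 1;

    // Made-up 2x3 affine matrix: no rotation or scale, just a
    // (15, 4) pixel translation, like a small camera offset.
    cv::Mat warp = (cv::Mat_<double>(2, 3) <<
        1, 0, 15,
        0, 1, 4);

    cv::Mat dst;
    cv::warpAffine(src, dst, warp, src.size());

    cv::imshow("warped", dst);
    cv::waitKey(0);
    return 0;
}
```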
In more exciting news, I had a buffalo chicken tender sub for lunch. It was delicious, and some Mountain Dew too.
After lunch came another significant step forward: after looking into OpenCV's HighGUI classes I found a way to reproduce the array's calibration routine. It is working well with test images and is likely to be the code used in the finished product.
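The trick is HighGUI's mouse callback: display each camera's image, let the user click the shared reference point, and record the pixel coordinates. A minimal sketch (the window title and file name are made up):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

static std::vector<cv::Point> clicks;

// HighGUI invokes this for every mouse event in the window.
static void onMouse(int event, int x, int y, int, void*) {
    if (event == cv::EVENT_LBUTTONDOWN)
        clicks.push_back(cv::Point(x, y));
}

int main() {
    cv::Mat img = cv::imread("camera0.png");  // invented file name
    if (img.empty()) return 1;

    cv::namedWindow("calibrate");
    cv::setMouseCallback("calibrate", onMouse);

    // Click the reference point(s), then press any key to finish.
    cv::imshow("calibrate", img);
    cv::waitKey(0);

    for (size_t i = 0; i < clicks.size(); ++i)
        std::cout << clicks[i] << std::endl;
    return 0;
}
```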
Here is a picture of the calibration in action:

[calibration screenshot]
Wednesday, July 10, 2013
02 Starting work with OpenCV and Raspberry Pis
Well, I spent the first portion of today attempting to install a web server to grab images from a Raspberry Pi. This proved fruitless.
Afterwards I looked into setting up OpenCV to replace the Matlab matrix and image functions. This seems promising; however, so far I've been unable to get the library set up due to operating system issues and a number of other things. Lunch today was fun: we saw a couple of TED talks and ate some complimentary pizza.
Looking forward to figuring out the library issues tomorrow.
Tuesday, July 9, 2013
01 What sort of 'stuff' do I do
So, although I've already done a post today, this one is specifically tailored to today, as the following days' blog posts will be. This post is about what I do, or will be doing, to help the multi-camera array project.
My "area of expertise" is programming and computer science, so another intern (Elizabeth) and I are working to help optimize the current code, and potentially rewrite the current processing in another, faster programming language. Today specifically I've looked into code for running QTcpServer-based servers on Raspberry Pis, to improve the performance of synchronized image fetching from the array, and talked to a grad student TA who has been involved with the code to get a better understanding of the existing processing code.
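For the Pi side, here's a bare-bones sketch of what a Qt TCP image server might look like, assuming Qt 5; the port number and file name are invented for illustration:

```cpp
#include <QCoreApplication>
#include <QFile>
#include <QHostAddress>
#include <QTcpServer>
#include <QTcpSocket>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);

    QTcpServer server;
    server.listen(QHostAddress::Any, 5000);  // invented port

    // Whenever the array PC connects, ship it the newest frame.
    QObject::connect(&server, &QTcpServer::newConnection, [&server]() {
        QTcpSocket* client = server.nextPendingConnection();
        QFile frame("latest.jpg");  // invented file name
        if (frame.open(QIODevice::ReadOnly))
            client->write(frame.readAll());
        client->disconnectFromHost();  // flushes pending data, then closes
    });

    return app.exec();
}
```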
This project seems like it will be a decent challenge and may require that I learn a good deal of programming around image processing, UI, and server libraries (OpenCV and Qt). Challenge or not, I hope to accomplish the task of speeding the processing up to around 30 fps (frames per second). It will make for a very busy and challenging summer.
00 The very second day!
So, as interns here at RIT we are required to keep blogs. That's pretty neat; no real complaints. And as today is only the second day, this is my first blog post.
I work in the Multi-Camera Array lab with two other interns and two undergrad CIS students. The multi-camera array is, simply put, a bunch of cameras (six, in our case) arranged in a row, or in some setups several rows. When the images produced by the array are processed and stitched together, an effect called synthetic aperture lets the resulting image show objects in focus despite occlusions that would block the view of any single camera.
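In code, the heart of the effect is surprisingly small: shift each camera's image so the depth of interest lines up, then average the stack. Occluders sit at a different depth, so they land in different places in each shifted frame and wash out. A toy illustration, assuming the frames have already been shifted into alignment (this is my sketch of the idea, not the lab's code):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Toy synthetic-aperture step: average a stack of frames that have
// already been shifted so the target depth lines up. The target
// stays sharp; occluders land in different spots per frame and blur away.
cv::Mat syntheticAperture(const std::vector<cv::Mat>& aligned) {
    cv::Mat acc = cv::Mat::zeros(aligned[0].size(), CV_32FC3);
    cv::Mat f;
    for (size_t i = 0; i < aligned.size(); ++i) {
        aligned[i].convertTo(f, CV_32FC3);
        acc += f;
    }
    cv::Mat out;
    acc.convertTo(out, CV_8UC3, 1.0 / aligned.size());  // rescale to 8-bit
    return out;
}
```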
This technology has potential applications in security and numerous other fields (in other words, I'm sure someone will think of other applications; who knows, maybe I will). It's a pretty cool phenomenon regardless of its application.
So that's the project, and the gist of the lab.