Category Archives: Electrical Engineering and Computer Science

Learning without a brain

Instructions for how to win a soccer game:

Score more goals than your opponent.

Sounds simple, but these instructions don’t begin to capture the complexity of soccer, and they are useless without knowledge of the rules or of how a “goal” is “scored.” Cataloging every variable and situation that could decide a soccer match is impossible, and even having all that information would not guarantee a win. Soccer takes teamwork and practice.

Researchers in robotics are trying to figure out how to make robots learn behaviors for games such as soccer, which demand both collaboration and competition.

How then would you teach a group of robots to play soccer? Robots don’t have “bodies,” and instructions based on human body movement are irrelevant. Robots can’t watch a game and later try some fancy footwork. Robots can’t understand English unless they are designed to. How would the robots communicate with each other on the field? If a robot team did win a soccer game, how would they know?

Multiple robot systems are already a reality in automated warehouses.

Although this is merely an illustrative example, these are the types of challenges encountered by folks working to design robots to accomplish specific tasks. The main tool for teaching a robot to do anything is machine learning. With machine learning, a roboticist gives a robot limited instructions for a task, the robot attempts the task many times, and the roboticist rewards the robot when the task is performed successfully. The robot learns to accomplish the task from those rewards and uses that experience to keep improving. In our soccer example, the robot team is rewarded when it scores a goal, and over many games it gets better at scoring goals and winning.
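To make that trial-and-error loop concrete, here is a minimal sketch in Python. The kick angles, scoring probabilities, and reward scheme are invented purely for illustration and are not taken from Yathartha’s work; the point is only that the robot improves from reward alone, without explicit instructions.

```python
# A minimal sketch of reward-driven trial and error (the idea behind machine
# learning for robots described above). The "soccer drill" is made up: the
# robot picks one of a few kick angles, occasionally scores, and slowly learns
# which angle earns the most reward.
import random

KICK_ANGLES = [-30, -15, 0, 15, 30]                            # hypothetical discrete actions
SCORE_PROB = {-30: 0.1, -15: 0.3, 0: 0.6, 15: 0.3, 30: 0.1}    # assumed, unknown to the robot

value = {a: 0.0 for a in KICK_ANGLES}    # the robot's estimate of each action's worth
counts = {a: 0 for a in KICK_ANGLES}
epsilon = 0.1                            # how often to try a random action

for attempt in range(5000):
    # Explore sometimes, otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.choice(KICK_ANGLES)
    else:
        action = max(KICK_ANGLES, key=lambda a: value[a])

    # The environment (not the roboticist) decides whether a goal is scored.
    reward = 1.0 if random.random() < SCORE_PROB[action] else 0.0

    # Update the running average of reward for the chosen action.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("Learned values:", {a: round(v, 2) for a, v in value.items()})
# After enough attempts the straight-on kick (angle 0) ends up with the highest
# value: the robot has "learned" it from reward alone.
```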

Programming machines to automatically learn collaborative skills is very hard because the outcome depends not only on what one robot did but on what every other robot did, so it is difficult to work out who contributed the most and in what way.
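One common way researchers tackle this credit-assignment problem is with a “difference reward”: compare the team’s score to what the score would have been if a given robot had done nothing. The toy example below is only a sketch of that idea, with made-up numbers, and is not necessarily the approach discussed on the show.

```python
# A toy illustration of the credit-assignment problem and one common workaround,
# the "difference reward": evaluate the team with and without each robot's
# contribution. The team_score function and the contributions are invented
# purely for illustration.
def team_score(contributions):
    """Team performance as a function of each robot's (hidden) contribution."""
    return sum(contributions)

contributions = {"robot_A": 0.7, "robot_B": 0.2, "robot_C": 0.1}

global_reward = team_score(list(contributions.values()))

for name in contributions:
    # Counterfactual: what would the team have scored if this robot did nothing?
    without_me = [v for k, v in contributions.items() if k != name]
    difference_reward = global_reward - team_score(without_me)
    print(f"{name}: shared reward = {global_reward:.1f}, "
          f"individual credit = {difference_reward:.1f}")

# Every robot sees the same shared reward, which says nothing about who helped;
# the difference reward separates robot_A's large contribution from the rest.
```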

Our guest this week, Yathartha Tuladhar, a PhD student studying Robotics in the College of Engineering, is focused on improving multi-robot coordination. He is investigating both how to effectively reward robots and how robot-to-robot communication can increase success. Fun fact: robots don’t communicate in human language. Instead, roboticists define a limited vocabulary of numbers or letters that can become words, and the robots learn their own language from it. Not even the roboticist will be able to decode the communication!
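The sketch below shows, in miniature, how such a made-up language can emerge from reward alone: a speaker robot picks a symbol from a small vocabulary, a listener robot acts on it, and both are rewarded only when the listener interprets it correctly. The situations, vocabulary, and learning rule here are illustrative assumptions, not the actual setup used in Yathartha’s research.

```python
# A minimal sketch of robots inventing their own "language" from reward alone.
import random

SITUATIONS = ["pass", "shoot", "defend"]
VOCABULARY = ["A", "B", "C", "D"]     # symbols carry no built-in meaning

# Preference tables: speaker maps situation -> symbol, listener maps symbol -> guess.
speaker = {s: {w: 0.0 for w in VOCABULARY} for s in SITUATIONS}
listener = {w: {s: 0.0 for s in SITUATIONS} for w in VOCABULARY}

def pick(prefs, epsilon=0.1):
    """Mostly choose the highest-scoring option, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(prefs))
    return max(prefs, key=prefs.get)

for episode in range(20000):
    situation = random.choice(SITUATIONS)
    word = pick(speaker[situation])   # speaker emits a symbol
    guess = pick(listener[word])      # listener interprets it
    reward = 1.0 if guess == situation else -0.1
    speaker[situation][word] += reward    # reinforce what worked
    listener[word][guess] += reward

for s in SITUATIONS:
    print(s, "->", max(speaker[s], key=speaker[s].get))
# The mapping that emerges is not designed by anyone; it falls out of trial and
# error, which is why even the roboticist cannot decode it in advance.
```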


Human-Robot collaborative teams will play a crucial role in the future of search and rescue.

Yathartha is from Nepal and became interested in electrical engineering as a career that would aid infrastructure development in his country. After earning a scholarship to study electrical engineering in the US at the University of Texas at Arlington, he learned that electrical engineering is about more than building power networks and helping buildings run on electricity. He found that electrical engineering is about discovery, creation, and trial and error. Ultimately, it was an experience volunteering in a robotics lab as an undergraduate that led him to where he is today.

Tune in on Sunday at 7pm and be ready for some mind-blowing information about robots and machine learning. Listen locally to 88.7FM, stream the show live, or check out our podcast.

Don’t just dream big, dream bigger

If you’ve purchased a device with a display (e.g. a television, computer, mobile phone, or handheld game console) in the last couple of decades, you may be familiar with at least some of the following acronyms: LCD, LED, OLED, Quantum LED – no, I did not make that up. Personally, I find it all a bit overwhelming and difficult to keep up with, as display technology evolves so rapidly. But until a display can reproduce an image indistinguishable from what we see in nature, there will always be a desire to make the picture more lifelike. The limiting factor in making displays appear realistic is the number of colors used to build the image; current displays do not use all of the visible color wavelengths.

Akash conducting research on nanoparticles.

This week’s guest, Akash Kannegulla, studies how light interacts with nanostructured metals, with applications in advancing display technology as well as biosensing. Akash is a PhD candidate in the Electrical Engineering and Computer Science program with a focus in Materials and Devices in the Cheng Lab. By exploiting the physical and chemical properties of nanoparticles, Akash works toward the advancement of display and biosensing technologies.

When light shines on a metal, photons and electrons interact and oscillate together to create a surface plasmon, or “electron cloud.” Under specific conditions, when a fluorescent dye near this plasmonic surface is excited with UV light, its electrons jump to higher energy levels; when they fall back to lower levels, the energy is released as light. That emission can be 10-100X brighter than the dye’s fluorescence would be without the nanostructured surface. With this light magnification, less voltage is needed to produce a comparable brightness level. This has two main benefits: first, consumer products can use less energy to deliver the same visual experience, which can significantly decrease our carbon footprint; second, these conditions can be engineered at the nanoscale, which means smaller pixels and more colors, so our TV screens will look more and more like the real world around us. These nanoscale structures only work within extremely tight tolerances; however, in this case, “not working” can also provide some incredible information.
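For a rough sense of scale, the back-of-the-envelope calculation below assumes, purely for simplicity, that brightness grows roughly linearly with drive power (real display pixels are more complicated, and the baseline figure is hypothetical). It shows why a 10-100X enhancement translates into a large cut in the power needed for the same brightness.

```python
# Illustrative only: how much less power the same brightness might take if the
# emission is enhanced by a given factor, assuming a linear brightness-power
# relationship and a made-up baseline figure.
baseline_power_mw = 50.0    # hypothetical power to light one pixel today

for enhancement in (10, 50, 100):
    power_needed = baseline_power_mw / enhancement
    savings = 100 * (1 - power_needed / baseline_power_mw)
    print(f"{enhancement:>3}x brighter emission -> "
          f"{power_needed:5.1f} mW for the same brightness ({savings:.0f}% less power)")
```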

This technology can also be applied in biosensing to detect mismatches in DNA sequences. A ‘mismatch’ in a DNA sequence has a slightly different chemical bond – the distance between the atoms is ever so slightly different than expected – and that tiny difference can be detected by how intense the emitted light is. Again, the nanoscale is frustratingly finicky about how precise the conditions must be to get the expected response, in this case light intensity, so a ‘dim’ spot can indicate a mismatched DNA segment! Akash predicts that in just a few years, this nanotechnology will make single-nucleotide differences detectable with sensing technology on a small chip, or even with a phone camera, rather than with a machine half the size of a MINI Cooper.

Akash, the entrepreneur, with his winning certificate for the WIN Shark Tank 2018 competition.

In addition to his research, Akash has spent a significant portion of his graduate career investing in an award-winning start-up company, Wisedoc. The project was inspired by a frustration Akash felt – one probably shared by all graduate students and researchers – when trying to publish his own work: he found himself spending too much time formatting and re-formatting manuscripts rather than conducting research. With Wisedoc, you input your article content into the program and select a journal of interest. The program then formats your content to that journal’s specifications, which are approved by the journal’s editors, making publishing academic articles seamless. If you want to submit to another journal, it only takes a click to update the formatting. Follow this link for a short video on how Wisedoc works. And for those of us with dissertations to format, no worries – Wisedoc will have an option for that, too. Akash notes that Wisedoc would not have been possible without the help of OSU’s Advantage Accelerator program, which guides students, faculty, staff, and the broader community through the start-up process. Akash’s team won the Willamette Innovators Network 2018 Shark Tank competition, which earned them an entry into the Willamette Angel Conference, where Wisedoc won the Speed Pitch competition. If you are as eager as I am to check out Wisedoc, the launch is only a few months away in December 2018!

The soon-to-be Dr. Akash Kannegulla – his defense is only a month away – is the first person in decades from his small town on the outskirts of Hyderabad, India, to attend graduate school. Akash’s start in engineering was inspired by his uncle, an accomplished instrumentation scientist. Not knowing where to start, Akash adopted his uncle’s career choice of engineering, but took the time to thoroughly explore his specialty options as an undergraduate. A robotics workshop at his undergraduate institution, Amrita School of Engineering in Bangalore, India, sparked Akash’s interest because of the hands-on nature of the science. Akash then explored undergraduate research opportunities in the United States, landing a Nano Undergraduate Research Fellowship at the University of Notre Dame. During the summer of 2013, Akash studied photo-induced reconfigurable THz circuits and devices under the guidance of Dr. Larry Cheng and Dr. Lei Liu. Remarkably, the research Akash conducted during this four-week fellowship resulted in a publication. After graduating with a Bachelor of Technology in Instrumentation, Akash decided to come to Oregon State University to continue working with Dr. Cheng as a PhD student.

After defending, Akash will be working at Intel in Hillsboro, as well as preparing for the launch of Wisedoc in December. And if that doesn’t sound like enough to keep him busy, Akash has two more start-ups in the works.

Join us on Sunday, July 22 at 7 PM on KBVR Corvallis 88.7 FM or stream live to learn more about Akash’s nanotechnology research, start-up company, and to get inspired by this go-getter.