sharadsinha

Archive for the ‘Embedded Systems’ Category

Translational Research: What I learned doing the (seemingly) mundane task of video annotation

In Design Methodologies, Education, Embedded Systems, Engineering Principles, Research and Development on November 27, 2016 at 3:04 PM

In the recent past I have been doing some work related to automatic video annotation. Videos that you and I take can be annotated with data about their contents: the objects present, their types, shapes and colors, the number of objects, whether they are static or in motion, and whether the background scene itself is moving or static. One would also like to keep track of objects as the video progresses; tracking tells you when an object appeared in the scene and when it disappeared. None of the prior work on automatic video annotation is really completely automatic ([1], [2] etc.). These methods are semi-automatic at best, and manual input and control are still required when annotating with them.
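
Below is a minimal sketch of the kind of semi-automatic annotation such methods perform, assuming a static camera and OpenCV's background subtraction; the video file name, the blob-area threshold and the output format are my own illustrative choices, not taken from any of the cited works. It also shows why such pipelines stay semi-automatic: the thresholds and the static-scene assumption have to be tuned and checked by a human.

```python
# A minimal sketch of semi-automatic annotation of moving objects, assuming a
# static camera. Requires opencv-python; file names and thresholds are
# illustrative placeholders.
import cv2
import json

cap = cv2.VideoCapture("input.mp4")                  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
annotations = []                                     # one record per detected box

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                   # suppress salt-and-pepper noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                 # assumption: ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        annotations.append({"frame": frame_idx, "bbox": [x, y, w, h]})
    frame_idx += 1

cap.release()
with open("annotations.json", "w") as f:             # dump annotations for human review
    json.dump(annotations, f, indent=2)
```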

While doing this work, I developed a better understanding of some of the so-called “automatic object tracking for surveillance” solutions out there in the market. None of these solutions can ensure a completely hands-off scenario for humans; humans still need to be involved, and there are reasons for that. At the same time, it is also possible to do everything in the cloud (including the human interaction) and claim it is “hands off for the user”. In that case, the client is simply paying someone else to provide the service; it is not a stand-alone, autopilot kind of system installed on the user’s premises. Real automatic video annotation is extremely hard, especially when the scene can change without any guarantees. If we add “video analytics”, i.e. the ability to analyse the video automatically to detect a certain set of activities, it again becomes very difficult to propose a general solution. So assumptions are made once more, and these can be based on user requirements or can be domain specific (say, tennis video analytics at Wimbledon). Here is a system which may be of interest to you: IBM’s Digital Video Surveillance Service, along with a few others described in the paper titled “Automated visual surveillance in realistic scenarios“.

Most of the research work makes certain assumptions, either about the scenes or about the methods used. These assumptions simply fail in real-world scenarios. The methods may work under a “restricted real-world view” built on a set of assumptions, but when those assumptions fail, the methods become limited in applicability.

I believe this is a critical issue that researchers who want to translate their work into usable products have to understand. This is where strong theoretical and practical foundations in a discipline are both needed: theory gives you the methods and the tools, engineering tells you what can and cannot be done, and the two interact back and forth.


ChatBot: Cost Cutting at the Cost of User Experience

In Design Methodologies, Education, Embedded Systems, Science & Technology Promotion and Public Policy on August 31, 2016 at 4:09 PM

Many of you may be familiar with chatbots. For those who aren’t, a chatbot is a computer program designed to hold a conversation with a human being (Wikipedia). So, instead of talking to a real person, you talk to a computer program. The chatbot responds using artificial intelligence methods, which can also include querying databases. For instance, you can ask a chatbot on a merchant website to show you “shoes of size 5, blue in color, for sports and within 50 dollars”. You don’t have to search using a filter and set various thresholds. The chatbot processes your textual or verbal input (assuming there is speech recognition) and gets you the results.
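
As a rough illustration of what happens behind such a request, here is a small sketch that turns the free-text query above into structured search filters. The patterns and field names are my own assumptions for illustration; a real chatbot would use far more sophisticated natural language processing.

```python
# A minimal sketch of turning a free-text shopping request into structured
# filters. Patterns and field names are illustrative assumptions only.
import re

def parse_query(text: str) -> dict:
    filters = {}
    if m := re.search(r"size\s+(\d+)", text, re.IGNORECASE):
        filters["size"] = int(m.group(1))
    if m := re.search(r"within\s+(\d+)\s*dollars", text, re.IGNORECASE):
        filters["max_price"] = int(m.group(1))
    for colour in ("blue", "red", "black", "white"):
        if re.search(rf"\b{colour}\b", text, re.IGNORECASE):
            filters["colour"] = colour
            break
    if re.search(r"\bsports?\b", text, re.IGNORECASE):
        filters["category"] = "sports"
    return filters

query = "shoes of size 5, blue in color, for sports and within 50 dollars"
print(parse_query(query))
# {'size': 5, 'max_price': 50, 'colour': 'blue', 'category': 'sports'}
```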

This technology is being promoted as the next major innovation to improve efficiency. The problem is that “efficiency” is itself a much-abused term; I will go into the details some other time. Companies are being told that by using chatbots they can increase customer satisfaction, reduce manpower, automate customer interaction and so on. To me, these appear to be far-fetched conclusions. Human beings like to chat with human beings. That is one reason why Honda’s humanoid robot ASIMO and other similar attempts have failed to make the cut as caretakers; they simply cannot replace nurses as of today. Artificial intelligence and caretaker robots are overhyped at the moment.

I agree that in certain circumstances, where a conversation revolves around very structured data and can be very specific, chatbots may be useful. However, if we examine how humans search for something, we will find plenty of randomness in it. Most of the music videos that I have liked, I bumped into accidentally. This may not be the case with music aficionados, but it is with me and others like me who explore certain things randomly and out of curiosity.

I am writing this post because I had a recent experience with a chatbot that was contrary to the selling points of chatbot providers and of those who buy chatbot technology to improve customer engagement. Read the hilarious conversation below. Let me call the chatbot CB, though it actually had a name on the service provider’s website.

Me: I wanted to know something.

CB: I am here to help. (This is actually CB’s standard opening response to every conversation that is started.)

Me: I wanted to know how I can register for user authentication.

CB: I have found the following links that may be helpful: link-1, link-2 (a sequence of hyperlinks)

Me (the hyperlinks were not helpful as I had already seen them under the FAQ): The website says that I will be auto-registered for authentication by March 2016. But this is August 2016. How will I be auto-registered now? What should I do?

CB: I have found the following links that may be helpful: link-1, link-2 (a sequence of hyperlinks; the exact same answer as earlier)

Clearly, CB had no idea what I was talking about. The service provider had initiated some ad-hoc measures for a while to register users for authentication, but had not updated the data provided to CB. The service provider had also failed to address the discrepancy in the stated timeline. I understand that business requirements can lead to such temporary measures, but that also means the client support system must be updated accordingly; otherwise it makes little sense. Apparently, CB had no mechanism to learn about new business measures on its own either. Needless to say, I was not satisfied with the service. This example demonstrated to me not only some of the limits of chatbot technology, but also the carelessness with which businesses buy and integrate it, thinking it is a good alternative to manpower-based customer interaction that will cut costs and increase customer engagement. On the contrary, approaches like this result in customer dissatisfaction and in duplication of work and effort somewhere else. And this experience was with a well-known provider of citizen services!

Economic Cost of Badly Designed Software Systems

In Design Methodologies, Education, Embedded Systems, Engineering Principles on July 18, 2016 at 10:47 PM

The goal of every design activity, whether in computing or in some other field, is to come up with a system that serves some economic purpose. So there are software and hardware systems that fly airplanes, run our cars and power grids, and so on. In the past, people were only distantly connected with these systems; they were mostly passive users, and the systems were used for very specific purposes. However, there has been growing emphasis on using such systems, especially software systems, in governance and in the delivery of public services to the citizenry. A lot of these public services are routine in nature and not particularly associated with life-threatening risks (unlike power grids, cars etc.). Perhaps this is one reason why so many software systems for the delivery of public services are so poorly designed. Not only can the design itself be poor, but the testing and validation of these systems is also taken very lightly. I also feel that the testing and validation of these systems have to be in sync with the general lifestyle and attitudes of the citizenry they serve. However, this is perhaps asking for the famous Swiss chocolates when not even a basic candy is available. 😛

Software systems used in industrial settings undergo rigorous testing and validation, and still they can fail, crash, malfunction and give erroneous results. Studies on the economic cost of such badly designed systems have reported losses of billions of dollars (see here and here). However, when badly designed software is used to provide citizen services, I am not aware of any report that analyzes the associated economic loss. You may be wondering what triggered this post, or this conclusion. Well, in India, the government has mandated booking of cooking gas via dedicated hotline numbers, which connect to a software system that manages the booking request, generation of the customer invoice etc. During a recent such exercise, my father received an SMS saying that the booking had been cancelled (with an even funnier reason stated in the SMS: “Reason: Cancelled Booking”). He had not applied for cancellation. So he had to drive to the vendor to inquire about this, because a number of these vendors are not responsive enough to answer such questions on the phone. The vendor replied that it was a software glitch, that the booking would be processed shortly, and that the SMS could be ignored. Not only did all this put stress on a citizen, it also sent precious petrol down the drain. Now multiply this one incident by another one lakh (a hundred thousand; a very conservative estimate) such cases a month and you get the picture. By the way, there are around 15 crore (i.e. 150 million) consumers of liquefied petroleum gas (LPG, the primary cooking gas in India) (see here).
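
To put a very rough number on the waste, here is a back-of-the-envelope sketch. Every figure in it (trip length, mileage, petrol price) is an assumption I am making purely for illustration; only the “one lakh incidents a month” estimate comes from the paragraph above.

```python
# A back-of-the-envelope sketch of the petrol wasted by such glitches.
# All inputs except the incident count are assumed, illustrative values.
round_trip_km = 10              # assumed average drive to the vendor and back
mileage_km_per_litre = 15       # assumed vehicle mileage
petrol_price_inr = 70           # assumed price per litre (circa 2016)
incidents_per_month = 100_000   # the "one lakh" conservative estimate above

litres_wasted = incidents_per_month * round_trip_km / mileage_km_per_litre
cost_inr = litres_wasted * petrol_price_inr
print(f"Petrol wasted per month: {litres_wasted:,.0f} litres (~INR {cost_inr:,.0f})")
# Roughly 66,667 litres, or about INR 4.7 million a month, before counting
# the time lost by citizens and vendors.
```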

Apart from the economic cost (whether big or small), such incidents create friction and distrust in the system. This is a bigger danger, as it cannot be put in monetary terms. Citizens begin to suspect service providers and begin to complain. All of this can be avoided if these social software systems are properly designed and the service providers are educated about their proper usage. Unfortunately, this last part seems to be the least of the concerns for many people involved in such exercises.

User Interface (UI) Design for Computer Systems

In Design Methodologies, Embedded Systems, Engineering Principles on January 13, 2016 at 8:03 PM

I believe that proper User Interface (UI) design for computer systems is a must. All the technical, scientific and engineering wizardry that engineers put into writing code and developing a system comes to naught if the user interface is not human-centric. There are countless examples of poor UI design, and one can find them even at places that excel in research and development. Would it not be surprising to visit a renowned research lab or university and find that it takes a user a long time to figure out how to use a machine to update some data on a card? It can be a bewildering experience.

When you go to an ATM to withdraw money, you are actually interacting with the machine through a user interface (UI). You insert your card, provide security details and choose options from the on-screen menu. This is all fine as long as you understand the languages used by the machine. These and similar machines, like queue number dispensers, ticket vending machines etc., are widely used these days.

Among other things, I consider the choice of language the most important decision that a user should be allowed to make before providing other inputs to the machine for processing. If the user does not understand the current language and it takes a while to figure out how to change it, the user is left with a bad experience.

The very first view on the screen of such a machine should therefore be the selection of a language. The screen could display a message such as “Choose a language” with the list of supported languages shown alongside. Of course, this assumes that the user would understand the message “Choose a language” written in one of the supported languages. I think a better option is to simply show all the supported languages without any message. The user then selects one, and the usual process follows. Such a design would work best with ATMs, ticket vending machines and the like: machines with which a user interacts, rather than simply relying on them for information. The speedometer display of your car, for instance, just provides you with information; you do not interact with it. For such interfaces, other UI designs are more suitable.
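
As a small illustration of the “languages first, no message” idea, here is a sketch of a kiosk flow whose very first screen shows nothing but the supported languages. The languages, prompts and fallback choice are illustrative assumptions.

```python
# A minimal sketch of a "language first" kiosk flow. Languages, prompts and
# the fallback default are illustrative assumptions.
LANGUAGES = {
    "1": ("English", "Please insert your card"),
    "2": ("हिन्दी", "कृपया अपना कार्ड डालें"),
    "3": ("中文", "请插入您的卡"),
}

def first_screen() -> str:
    # No instruction text at all: only the language names themselves,
    # so the user does not need to read any particular language first.
    for key, (name, _) in LANGUAGES.items():
        print(f"{key}. {name}")
    choice = input("> ").strip()
    return choice if choice in LANGUAGES else "1"   # fall back to a default

def run_kiosk() -> None:
    lang = first_screen()
    _, prompt = LANGUAGES[lang]
    print(prompt)   # the rest of the flow continues in the chosen language

if __name__ == "__main__":
    run_kiosk()
```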

The problem with UI design in many systems is that it is done by engineers and managed by managers who have little training in this area, or who simply do not care to think about it as hard as they would about the software and hardware design of the system. The result is a clunky and sometimes dangerous user interface. Here are some examples of poor UI designs and their effects.

So, the next time when you do a UI design, please have some consideration for the poor users and let them have an easy life! 😉

Component Problems with Electronic Systems

In Education, Embedded Systems, Engineering Principles on December 30, 2014 at 9:37 PM

It is not surprising to find component problems with electronic systems. I was working with a Zedboard recently and it would just not boot from the supplied SD card. The serial driver was properly installed, but the LED would not light up, and the host PC’s operating system did not complain about any driver issues. Some members on the Zedboard forum have complained about problems with the micro-USB socket on the board. In any case, when working with a development or evaluation board, it can become difficult to diagnose such issues. I tried different SD cards as well, but to no avail. My laptop could recognize the SD card, yet Windows was unable to format it!

This experience makes me feel that it is relatively easy to simulate a design and test it for functional correctness. It is far more frustrating when components on a board stop working and you do not know which one. In my case, the SD card could be corrupt, the SD card reader could be faulty, or, according to the forums, there could be issues with the serial port driver. It is not that the issue is impossible to diagnose; it is just that you have to isolate the problem by checking the different possible causes one by one. That wastes a lot of time, especially when you expect a dev/eval board to be up and running quickly.
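
As one small example of such an isolation step, the sketch below checks whether the board’s USB-UART bridge enumerates on the host at all, which separates cable, socket and driver problems from SD card and boot image problems. It assumes pyserial is installed, and the vendor/product IDs are placeholders, not the Zedboard’s actual values.

```python
# One isolation step: does the USB-UART bridge enumerate on the host?
# Requires pyserial (pip install pyserial). VID/PID are placeholders.
from serial.tools import list_ports

EXPECTED_VID = 0x1234   # placeholder vendor ID, replace with the bridge chip's
EXPECTED_PID = 0x5678   # placeholder product ID

ports = list(list_ports.comports())
if not ports:
    print("No serial ports found: suspect the cable, the socket or the driver.")
for p in ports:
    print(f"{p.device}: {p.description} (VID={p.vid}, PID={p.pid})")
    if p.vid == EXPECTED_VID and p.pid == EXPECTED_PID:
        print("  -> bridge enumerates; move on to the SD card and boot image.")
```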

One board can take away so much time. Imagine having to do this for 20 such boards, which is usually the case when they are procured for student laboratory exercises! Can’t there be a better way to know the status of components? Perhaps it is time to investigate this!

When Engineering Meets Art

In Embedded Systems, Engineering Principles on November 30, 2014 at 2:33 PM

Do you remember watching a spectacular acrobatics show? Imagine watching not only that but a lot more. That is exactly what you get at The House of Dancing Water show at the City of Dreams in Macau. The show presents a dance drama that not only travels in time but is also supported by stunning engineering work. In fact, I think that without the engineering, the dance drama would lose half its effect. Seated in a 360-degree theater, you watch the drama open on a stage of water, with actors swimming and emerging from it. Yes, it is water! Ship-like platforms rise, incredible audio and lighting equipment creates thunderstorms and lightning, and the actors perform on these platforms. They tell a story, and they also jump into the water on stage from platforms that rise many meters in height. Water cannons shoot water in all kinds of shapes, which are used to striking effect in the story. Small boats come sailing onto the stage, which itself alternates between being a dry platform, no platform at all (only water), and partially dry and partially filled with water. The water cannons shoot through holes in these platforms, and water sprinklers on the part of the roof covering the stage create rain. Near the roof, an incredibly complex mechanical control system spans the stage area to slide actors onto the stage, pull them away, make them fly and so on. Aside from the brilliant performance, you can only wonder at the amount of engineering ingenuity that has gone into making all this possible.

House of Dancing Water

For one-tenth of the cost of a ticket to The House of Dancing Water, one can also watch another show called “Dragon’s Treasure“. Imagine standing on a stage that cuts right through the center of a sphere. The top half of the sphere is the screen, on which a story about dragons is played out with brilliant sound and light coordination. You need to turn your head all around to follow the movements of the dragons on the gigantic screen.

Dragon’s Treasure Show

This immersive experience is enhanced further by the dragons spitting fire through the holes that you can see on the screen in the image above. Of course, the dragons are not real, but the fire is!

These shows confirm that art can reach greater heights by using engineering to its advantage. While engineering plays a secondary role compared to the story and the imagination of the writer, it greatly enhances the overall effect! The audience walks away with a sense of time well spent and money well spent.

 

Learning Through Examples

In Education, Embedded Systems on October 24, 2014 at 6:08 PM

I am a big supporter of the “learning through examples” paradigm. Not only does it make the concept clearer, it also leaves an impression in the learner’s mind about the method and the tools used. Over the past couple of months, I have been preparing course slides for an undergraduate course in reconfigurable computing, along with laboratory exercises for the students enrolled in the course. I have found it a lot easier to explain important concepts and tool flows using examples. Students have found this better than slides which have very few examples or are very abstract (leaving the instructor to fill in a lot of details orally during the lecture).

A good friend of mine, Adam Taylor, has been writing a series of blog posts for Xilinx’s Xcell publication. The posts focus on using the Zynq platform from Xilinx. The Zynq programmable SoC combines the strengths of an ARM processor with programmable logic; in fact, it has two ARM processor cores coupled with programmable FPGA fabric. His blog has covered in detail how to use the MicroZed board, which features a Zynq SoC. Complete with screenshots and step-by-step instructions, those articles will be useful to anyone interested in trying out this new kind of FPGA. They are now also available as a single PDF document for easy reference, which can be downloaded here.

Embedded system design using both an FPGA and a processor is a complex exercise, and any tutorial that makes the concepts and the tool flow easier to understand is always helpful for engineers.

What is the purpose of a lab?

In Education, Embedded Systems on July 22, 2014 at 9:22 PM

Laboratory sessions at universities form an integral part of the curriculum. This is especially the case in science and engineering disciplines. While different disciplines have different requirements regarding what will actually be done in these sessions, a basic question to ask is: what is their purpose? I will discuss this with respect to labs in a computer engineering curriculum. These lab sessions are meant to give students hands-on experience in working with devices like micro-controllers, microprocessors, field programmable gate arrays (FPGAs) etc. Often, students are given code (programs in a programming language) written by a teaching assistant (TA), which they are expected to use to program the device via some Integrated Development Environment (IDE). The students may be required to modify these programs based on the lab exercises.

Among other things, I have realized that there is too much emphasis on learning how to use the IDEs. This is not peculiar to one country or university; it seems to be the norm at many places if you look at the lab descriptions available online. It is true that different IDEs look dissimilar (obviously!) and that the options they provide can sit in different parts of the graphical user interface (GUI) and under different menus. However, they all follow a basic flow which is essential and relevant to the system or device they target. Good IDEs are similar in layout and easy to navigate, so it should be easy for students to move from one IDE to another after they have learned at least one properly. Besides, it is not so much the IDEs themselves but the different steps in the flow that are essential to learn. After all, an IDE merely packages the different steps necessary to program such systems and devices into one coherent click-and-run flow, as the sketch below illustrates.
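
Here is a sketch of the sort of flow an IDE hides behind a single “Run” button, assuming a bare-metal ARM GCC toolchain and OpenOCD. The tool names, flags and file names are illustrative; the point is the sequence of steps, not any particular board’s build recipe.

```python
# A minimal sketch of the steps an IDE typically automates for an embedded
# target. Assumes an ARM GCC toolchain and OpenOCD; flags and file names are
# illustrative placeholders.
import subprocess

steps = [
    # 1. Compile each source file into an object file
    ["arm-none-eabi-gcc", "-c", "-mcpu=cortex-m3", "main.c", "-o", "main.o"],
    # 2. Link against the board's linker script to produce an ELF image
    ["arm-none-eabi-gcc", "-T", "board.ld", "-nostartfiles", "main.o", "-o", "app.elf"],
    # 3. Convert the ELF into a raw binary (some boot flows need this form)
    ["arm-none-eabi-objcopy", "-O", "binary", "app.elf", "app.bin"],
    # 4. Program and reset the device (here via OpenOCD, purely as an example)
    ["openocd", "-f", "board.cfg", "-c", "program app.elf verify reset exit"],
]

for cmd in steps:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop the flow if any step fails
```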

I believe that lab sessions are meant to complement lecture-based learning. How the different steps, algorithms, methods etc. taught in class come together in a coherent manner to enable the programming of such systems is an important learning outcome. Besides, when working with development boards and evaluation kits, students learn to navigate user guides, reference designs, schematics, bill of materials (BOM) files etc. These are seldom taught in the classroom, yet they form a very important part of an engineer’s life in industry. Lab sessions provide an opportunity for students to relate and extend their classroom learning to what actually goes into designing, building and testing real-world systems. I think that should be one of the most important guiding factors for faculty members when designing lab sessions.

The Unlikely Places for Electronics Hardware Work

In Embedded Systems, Science & Technology Promotion and Public Policy on June 28, 2014 at 11:27 PM

The world is always changing, and big data is changing it in ever newer ways. Until a few years ago, no one would have thought that data-crunching companies and software companies would get involved in electronics hardware design work. Yet that is the case today. Microsoft is building programmable chips and hardware to speed up its Bing search engine (see here and here). Amazon has just released its own smartphone (see here). Companies like Google and Facebook, which would typically use commodity off-the-shelf hardware to build their datacenters, are now getting involved in real hardware design in order to make their datacenters more power-efficient and to increase their performance (see here and here). If one looks at the career openings in these companies, one finds openings for people with a background in electronics or computer hardware design.

On the other hand, if one looks at companies like IBM, Cisco, Oracle etc., the number of openings in these areas is comparable to those at Google and its peers. It is no surprise that some industry watchers have begun to wonder whether IBM is trying to become Google and Google is trying to become IBM. There was a time when IBM did a tremendous amount of computer hardware related work, but that is not the case today; a lot of its activities now involve software.

While companies like Marvell, ST Microelectronics and Infineon continue to work in the hardware domain and supply parts to different players in the electronics ecosystem, companies like Amazon have emerged as the dark horses in this space. They may not be as diversified as Infineon and the like, but they are very focused on what they want to do and what they want to offer. Their work is very customer-oriented and involves product design, which many people like to be involved with.

When Facebook asked Buddha to be Tagged

In Education, Embedded Systems on May 25, 2014 at 8:26 PM

Facebook has a face recognition feature. It automatically recognizes faces in uploaded pictures and then provides the option to tag the faces its software has managed to recognize in those images. It is very successful most of the time. Face recognition is a very active field of research, with many groups working on it. It is now part of security systems, the simplest example being laptops with a facial recognition feature for unlocking.

 Now, take a look at the image below. It shows two statues of Buddha at the 10,000 Buddha temple in Hong Kong.

Buddha Statues at 10,000 Buddha Temple, Hong Kong

 

When the picture was uploaded to FB, the software identified the two faces in it and asked for tags! That was quite a surprise for me, and it made me realize a limitation of existing facial recognition technology: it cannot differentiate between real human faces and faces that are part of a “non-human” element. This is one reason why facial recognition technology is known to be fooled using images; one can log in to a system by showing a photograph of a person with authorized access. This was discussed by USA Today here. It is also the reason why high-security establishments may opt for multi-modal authorization, in which facial recognition is just one part.
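
To see why, consider the sketch below: a plain face detector only looks for face-like patterns of light and dark and has no notion of “liveness”, so it will happily report a statue’s face. It assumes OpenCV with its bundled Haar cascade files; the image file name is a placeholder.

```python
# A minimal sketch showing that a plain face detector has no notion of
# "liveness". Requires opencv-python; the image file name is a placeholder.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("buddha_statues.jpg")            # hypothetical photo of the statues
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# The detector returns bounding boxes for statue faces too; anti-spoofing
# (liveness detection) needs extra cues such as depth, texture analysis or
# a challenge-response step.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
print(f"Detected {len(faces)} face-like regions")
```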

I guess that for facial recognition technology to truly be a single-point solution for authorization, it will have to learn to distinguish between human and non-human elements. The road ahead has a lot of interesting challenges!