Posts Tagged ‘Embedded Systems’

User Interface (UI) Design for Computer Systems

In Design Methodologies, Embedded Systems, Engineering Principles on January 13, 2016 at 8:03 PM

I believe that proper User Interface (UI) design for computer systems is a must. All the technical, scientific and engineering wizardry that engineers may pour into writing code and developing the system comes to naught if the user interface is not human-centric. There are countless examples of poor UI design, and one can find them even at places that excel in research and development. Would it not be surprising to visit a renowned research lab or university and find that users struggle to figure out how to use a machine to update some data on a card? It can be a bewildering experience.

When you go to an ATM to withdraw money, you are actually interacting with the machine through a user interface (UI). You insert your card, provide security details and choose options from the on-screen menu. This is all fine as long as you understand the language used by the machine. Similar machines, such as queue number dispensers and ticket vending machines, are commonplace these days.

Among other things, I consider the choice of language to be the most important decision that a user should be allowed to make before providing other inputs to the machine for processing. If users do not understand the current language and it takes them a while to figure out how to change it, they are left with a bad experience.

The very first view on the screen of such a machine should be related to the selection of a language. The message there could be “Choose a language”, with the list of supported languages shown simultaneously. Of course, this assumes that the user would understand the message “Choose a language” written in one of the supported languages. A better option, I think, is to simply show all the supported languages without any message. The user can then select one, and thereafter the usual process follows. Such a design would work best with ATMs, ticket vending machines and the like: machines with which a user interacts rather than simply relying on them for information. The speedometer display of your car, for instance, just provides you with information; you do not interact with it. For such interfaces, other UI designs will be suitable.
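The language-first flow described above can be sketched as a minimal screen model. This is only an illustration; the language names and prompts below are hypothetical placeholders, not taken from any real machine.

```python
# Minimal sketch of a language-first ATM screen (hypothetical data).
# The first screen shows only language names, each written in its own
# language, so the user needs no prior message to understand it.
SUPPORTED_LANGUAGES = {
    "1": ("English", "Please insert your card"),
    "2": ("Français", "Veuillez insérer votre carte"),
    "3": ("Deutsch", "Bitte führen Sie Ihre Karte ein"),
}

def first_screen():
    """Render the initial screen: just the language names, no message."""
    return [name for name, _prompt in SUPPORTED_LANGUAGES.values()]

def next_prompt(choice):
    """After a language is chosen, continue the flow in that language."""
    _name, prompt = SUPPORTED_LANGUAGES[choice]
    return prompt
```

Only after `next_prompt` does any text appear that requires understanding a particular language; the first screen asks nothing of the user except recognizing their own language's name.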

The problem with UI design in many systems is that it is done by engineers and managed by managers who have little training in this sphere or simply do not care to think as much as they would while doing software and hardware design for the system. This results in a clunky and sometimes dangerous user interface. Here are some examples of poor UI designs and their effects.

So, the next time when you do a UI design, please have some consideration for the poor users and let them have an easy life! 😉

Component Problems with Electronic Systems

In Education, Embedded Systems, Engineering Principles on December 30, 2014 at 9:37 PM

It is not surprising to find component problems with electronic systems. I was working with a Zedboard recently and it just would not boot from the supplied SD card. The serial driver was properly installed, but the LED would not light up, and the host PC’s operating system did not complain about any driver issues. Some members on the Zedboard forum complained about a micro-USB socket problem on the board. In any case, when working with a development or evaluation board, it can become difficult to diagnose such issues. I tried different SD cards as well, but to no avail. My laptop could recognize the SD card, but Windows was unable to format it!

This experience makes me feel that it is relatively easy to simulate a design and test it for functional correctness. It is far more frustrating when components on a board stop working and you do not know which one. In my case, the SD card could be corrupt, the SD card reader could be faulty, or, according to the forums, there could be issues with the serial port driver. It is not that the issue is difficult to diagnose; it is just that you have to isolate the problem by checking the possible causes one by one. That wastes a lot of time, especially when you expect a dev/eval board to be up and running quickly.

One board can take away so much time. Imagine having to do this for 20 such boards, which is usually the case when boards are procured for student laboratory exercises! Can’t there be a better way to know the status of components? Perhaps it is time to investigate this!

Learning Through Examples

In Education, Embedded Systems on October 24, 2014 at 6:08 PM

I am a big supporter of the “learning through examples” paradigm. Not only does it make concepts clearer, it also leaves an impression in a learner’s mind about the method and the tools used. Over the past couple of months, I have been preparing course slides for an undergraduate course in reconfigurable computing, along with laboratory exercises for the students enrolled in it. I have found it a lot easier to explain important concepts and tool flows using examples. Students have found this better than slides which have very few examples or are very abstract (leaving the instructor to fill in a lot of details orally during the lecture).

A good friend of mine, Adam Taylor, has been writing a series of blog posts for Xilinx’s Xcell publication. The posts have focused on using the Zynq platform from Xilinx. The Zynq programmable SoC combines the strengths of an ARM processor with programmable logic; in fact, it has two ARM cores coupled with programmable FPGA fabric. His series has covered in detail how to use the MicroZed board, which features a Zynq SoC. Complete with screenshots and step-by-step instructions, those articles will be useful to anyone interested in trying out this new kind of FPGA. They are now also available in a single PDF document for easy reference. The document can be downloaded here.

Embedded system design using both an FPGA and a processor is a complex exercise, and any tutorial that makes the concepts and the tool flow easier to understand is always helpful for engineers.

What is the purpose of a lab?

In Education, Embedded Systems on July 22, 2014 at 9:22 PM

Laboratory sessions at universities form an integral part of the curriculum. This is especially the case with science and engineering disciplines. While different disciplines have different requirements regarding what is actually done in these sessions, a basic question to ask is: what is their purpose? I will discuss this with respect to labs in a computer engineering curriculum. These lab sessions are meant to give students hands-on experience in working with devices like micro-controllers, microprocessors, field programmable gate arrays (FPGAs) etc. Oftentimes, students are given code (programs in a programming language) written by a teaching assistant (TA), which they are expected to use to program the device through some Integrated Development Environment (IDE). The students may be required to modify these programs based on the lab exercises.

Among other things, I have realized that there is too much emphasis on learning how to use the IDEs. This is not peculiar to one country or university; it seems to be the norm at many places if you look at the lab descriptions available online. It is true that different IDEs look dissimilar (obviously!) and that the options they provide to a user can sit in different parts of the graphical user interface (GUI) and under different menus. However, they all follow a basic flow which is essential and relevant to the system or device that they target. Good IDEs are similar in layout and easy to navigate. Therefore, it should be easy for students to move from one IDE to another after they have learned at least one properly. Besides, it is not so much the IDEs themselves but the different steps in the flow that are essential to learn. After all, IDEs package the different steps necessary to program such systems and devices into one nice, coherent click-and-run flow.
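To make the point concrete, here is a minimal sketch of the kind of flow an embedded IDE’s “build and run” button typically wraps. The step and file names are illustrative, not tied to any particular toolchain or board.

```python
# Illustrative model of the build flow hidden behind an IDE's
# one-click "build and run" (step and file names are hypothetical).
def build_and_flash(source_files):
    """Return the ordered list of steps a typical flow performs."""
    steps = []
    for src in source_files:                               # compile each source file
        steps.append(f"compile {src} -> {src.replace('.c', '.o')}")
    steps.append("link *.o -> firmware.elf")               # link objects into an image
    steps.append("objcopy firmware.elf -> firmware.bin")   # extract a raw binary
    steps.append("flash firmware.bin -> device")           # program the target device
    return steps
```

Whatever the IDE, it is the purpose of each of these steps that transfers to the next tool, not the location of the menu item that triggers it.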

I believe that lab sessions are meant to complement lecture-based learning. How the different steps, algorithms, methods etc. taught in class come together in a coherent manner to enable the programming of such systems is an important learning outcome. Besides, when working with development boards and evaluation kits, students can learn to navigate user guides, reference designs, schematics, bill of materials (BOM) files etc. These are seldom taught in the classroom, but they form a very important part of an engineer’s life in industry. Lab sessions provide an opportunity for students to relate and expand their classroom-based learning to what actually goes into designing, building and testing real-world systems. I think that should be one of the most important guiding factors for faculty members when designing lab sessions.

Communication Skills for User Interaction

In Design Methodologies, Engineering Principles, Research and Development on April 12, 2014 at 9:06 PM

I recently used the IVR (Interactive Voice Response) system of an organization tasked with issuing identity cards to citizens. An IVR system is supposed to improve customer experience besides helping the organization manage complaints, requests etc. Therefore, it plays a very important role. An IVR system comprises one or more menus which are read out to a caller, who then has to select one of the options. Interestingly, sometimes there are just so many options that one simply loses track. It also happens that the menu items do not sound similar to what the caller has in mind. So what do you do? You navigate to the one that sounds closest to what you had in mind and hope that it will solve your problem, or you wait for the option to talk to a staff member on the other side!

The IVR system that I referred to earlier had peculiar issues. If you selected the option that said something like “I would like to know if I need to reapply”, you would expect it to prompt you for some information, based on which you would be told whether or not you should reapply. However, this IVR system would give a response similar to “Please do not reapply as it is not desirable to have two identity numbers”. Now how on earth is that helpful?

The IVR system of a prominent smartphone company would give even more hilarious responses. When you call the number hoping to find a relevant menu or speak to someone, it tells you something similar to “Please visit our website to resolve your issue”. Now imagine that for some reason you do not have access to the internet; is that response of any help? Absolutely not.

This begs the question about the people (engineers, managers, UI designers etc.) involved in designing IVR systems. Do they really understand how people use a language to communicate? Do they spend time understanding the common phrases that people use to refer to their issues and then distill a subset that they can use in their system? Do they spend time brainstorming proper responses to different kinds of questions? A good IVR system is not just a software development exercise. It requires an understanding of communication and is affected by the communication skills of the team doing the design. Similarly, an IVR system with multiple menus and sub-menus can become difficult to navigate, especially for older people. Does the design team understand who the end users are and what kind of communication skills they have? I think these are important questions that should be considered. An IVR system is supposed to provide an easy solution to a user. It should be simple, straightforward and elegant.

A Tale of Two Samsung Galaxy S4s

In Design Methodologies, Education, Embedded Systems, Engineering Principles on May 14, 2013 at 7:23 PM

When you are in school or college, you are taught about the best ways to do things. It is generally about a point solution; alternatives are rarely discussed in detail. One almost always looks for the best answer, the best method, the best algorithm. When you begin to work for a company, you quickly realize that the best solution is not always what one is looking for. Time and market pressures play a role in choosing solutions. You can choose a solution that suits the “taste of the target market”. When you serve more than one market, it becomes interesting. Would you want to choose two different solutions for two different markets for the same product? This is one of the reasons that analysts cite for what Samsung has done with its Galaxy S4 smartphone. While the US and the Korean versions appear identical on the outside, they use quite a number of different components. Their processors, wireless and image-processing architectures are different. Supposedly, the Korean version is faster and has a longer battery life because it uses Samsung’s octa-core Exynos 5 processor, which has an architecture (read here) that attains a better balance of power efficiency and performance than the Qualcomm Snapdragon processor in the US version. iSuppli’s IHS Teardown Service reveals all the component-level differences between the two designs here.

A more plausible reason for the difference between the two architectures is the fact that the LTE bands supported by mobile operators in the US and Korea are different (see here); the two processors (essentially systems on chips in this case) may not support both sets of LTE bands. However, it does illustrate an important point related to engineering product design: you can design the same product with different architectures. While not related to the S4, this analysis reminds me of regulations in certain countries which make it compulsory for a manufacturer to source components from local suppliers for products sold in the local market. An example is here. Therefore, as a manufacturer you can end up with different components in different markets for the same product.

I used to think that a consumer electronic item sold in different countries used the same components. That myth now stands broken! While you can easily spot differences in software, the most prominent being the language used in the user interface, it is not easy to spot differences in hardware.

Role of Industrial Consortia in Education and Research

In Education, Embedded Systems, Industrial Consortia, Research and Development on February 8, 2013 at 6:58 PM

A Google search will reveal the existence of quite a few influential industrial consortia that further the cause of research and education in fields identified by them. Almost all of them are run jointly by people from industry and from prominent educational and research institutions. You can find a list of them compiled here; I have listed only the ones relevant to the electronics and computer industries. I have found that not many students are aware of these consortia, and that should not be the case. Some of them are highly active and contribute a lot to research, development of technology and education. Consortia like the Accellera Systems Initiative have contributed to a number of IEEE standards, some of which can be downloaded for free from its website. The Semiconductor Research Corporation plays an important role in promoting research and education in the field of semiconductors. The International Technology Roadmap for Semiconductors has played an immense role in identifying the challenges facing the semiconductor industry, from design to manufacturing to testing and validation. Many of these associations also offer scholarships and fellowships for students and research grants for faculty members. Their publications provide a lot of insight into the challenges of the present and of the future. These publications may not always have a lot of in-depth research material, the sort that most graduate students are accustomed to, but they successfully paint the bigger picture. Paying attention to such matters can help in keeping research relevant to industry where necessary. Besides, it also helps in learning about real-world problems and the challenges involved in translating research into technology that can be scaled up and widely used. Sometimes, problems are considered solved in academic research, but the solutions never make it to the market, even when relevant, because their translation into scalable technology remains an open problem.

Numerical Stability in Calculations

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on January 24, 2013 at 11:44 PM

I did not have any course on algorithms in my undergraduate education. I studied them (their properties, design etc.) during my research work. I now realize why their study is important for anyone who wants to be really good at designing algorithms or implementing them. After all, algorithms solve problems. I recently came across the subject of numerical stability of algorithms, numerical algorithms to be precise. While algorithms help solve problems, they need to be implemented on a digital machine (a computer, for example) which has limited precision. Whatever number system we use, it cannot cover all the numbers present in exact mathematics. This leads to approximations as well as upper and lower bounds on the numbers that can be represented, and the approximations can be a source of errors and deviations from the exact answer. For instance, a machine that keeps only 3 significant digits (rounding every result to the nearest representable number) can represent numbers like 22, 2.22, 0.110, 100 and 177. If you add 9 and nine instances of 0.11 on this machine, the answer is 9.99, which matches the exact answer. However, if you add 10 and nine instances of 0.11 in that order, i.e. 10 + 0.11 + 0.11 + …, the very first step 10 + 0.11 = 10.11 needs four digits, so the machine rounds it to 10.1; every later 0.11 likewise contributes only 0.1, and the final answer is 10.9 instead of the exact 10.99. Do the same calculation in the reverse order, i.e. 0.11 + 0.11 + … + 10: the partial sum 0.99 is exact, and 0.99 + 10 = 10.99 rounds to 11.0, which is much closer to the exact answer. Worse still, if you add 2 and 1000 instances of 0.11, the running sum eventually reaches 100, after which adding 0.11 changes nothing (100 + 0.11 rounds back to 100), so the machine returns 100 instead of the exact 112. This means that the way you arrange your numbers (in memory, for instance in an array) may influence the sum; as a rule of thumb, adding the small numbers first loses less accuracy!
I wish that embedded systems engineers would read more on this subject so that the numerical errors we see cropping up in such systems get reduced. A nice introduction is on Wikipedia.
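The 3-significant-digit machine above can be reproduced with Python’s `decimal` module by setting the context precision to 3. This is just a sketch of the rounding behaviour, not a model of any particular hardware.

```python
from decimal import Decimal, getcontext

# Emulate a machine that keeps only 3 significant digits:
# every arithmetic result is rounded to the context precision.
getcontext().prec = 3

def sum_in_order(values):
    """Sum values left to right, rounding after every addition."""
    total = Decimal(0)
    for v in values:
        total = total + Decimal(v)  # rounded to 3 significant digits
    return total

big_first = sum_in_order(["10"] + ["0.11"] * 9)    # 10 first, then the 0.11s
small_first = sum_in_order(["0.11"] * 9 + ["10"])  # 0.11s first, then 10
stagnated = sum_in_order(["2"] + ["0.11"] * 1000)  # running sum gets stuck at 100
```

The exact answers would be 10.99, 10.99 and 112; the emulated machine returns 10.9, 11.0 and 100 respectively, so the small-numbers-first ordering is the most accurate.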

Velocity, Displacement & Acceleration: Science vs. Engineering

In Design Methodologies, Education, Engineering Principles on December 24, 2012 at 7:04 PM

One often encounters the question: what is the difference between science and engineering? An oft-quoted answer is that engineering involves, roughly speaking, an application of science, or of scientific results borne out of investigation into the nature of matter and its interaction with its surroundings. Science is about acquiring more knowledge and understanding of existing phenomena, whereas engineering involves solving problems by applying that knowledge. Therefore, many also hold the view that it is applied science. Well, I won’t get into the debate of engineering vs. science or put before you an essay on this topic in this post. I would just like to highlight an example of where engineering takes over from science. Every student studies the concepts of velocity, acceleration and displacement in elementary physics classes. These concepts are very simple: velocity is the derivative of displacement with respect to time, while acceleration is the derivative of velocity with respect to time. Therefore, to get displacement from velocity, one needs to integrate the velocity with respect to time over a given time period. Similarly, velocity at a certain point in time is the result of integrating acceleration over a given time interval. Now, if one is asked to apply these principles to calculate velocity and displacement using the acceleration data obtained from a transducer mounted on an engine, how would one do it? In this case, the engine vibrates and there is no noticeable physical movement of the engine body from one place to another in the traditional sense (like a ball traveling from place A to place B in a field). This is where engineering comes in. An engine is a complex system and its vibrations need not be linear or constant in time. There can be vibrations with low frequencies as well as high frequencies, and there can be periods of no vibration at all.
In these cases, the calculation of displacement or velocity is not straightforward and requires greater insight into the mechanism of vibration as well as the nature of the acceleration signal. I would recommend reading 1, 2 and 3 to get an idea of how interesting and insightful it can become! These are links to articles by Prosig, which works in the area of noise and vibration analysis. Understanding these mechanisms is important for any embedded designer who writes code to measure such parameters using microcontrollers etc.
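As a small illustration of why naive textbook integration is not enough, the sketch below integrates a synthetic vibration signal with the trapezoidal rule; a tiny DC offset in the “sensor” makes the computed velocity drift steadily. The signal, sample rate and offset are all made up for the example and are not taken from the articles above.

```python
import math

def integrate(samples, dt):
    """Cumulative trapezoidal integration of an evenly sampled signal."""
    out = [0.0]
    for a0, a1 in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a0 + a1) * dt)
    return out

dt = 0.001                                             # 1 kHz sampling (made up)
t = [i * dt for i in range(1000)]
accel = [math.sin(2 * math.pi * 50 * ti) for ti in t]  # 50 Hz vibration signal
biased = [a + 0.05 for a in accel]                     # small accelerometer offset

v_true = integrate(accel, dt)
v_drift = integrate(biased, dt)
# The offset integrates to a linear drift of about 0.05 * t on top of
# the true velocity, so v_drift keeps growing while v_true oscillates.
```

Real vibration work removes such offsets and trends (for example by high-pass filtering) before integrating, which is part of the insight those articles discuss.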

The Internet of Things

In Embedded Systems, Engineering Principles, Interdisciplinary Science on November 29, 2012 at 2:43 PM

When I first attended a presentation on “The Internet of Things”, I was not very excited. It turned out to be nothing more than a glorified description of sensor networks. Though the phrase was first used in 1999, as reported in an article in the RFID Journal, it has been interpreted in many different ways by different people, and finding a way through that maze of descriptions is really difficult. However, after reading a lot about it, and based on my own understanding of embedded systems, sensor networks and systems engineering, I would like to share what it means for a non-technical audience. I find it best to explain through examples. Take the case of a smart home: you can control the appliances in your home while driving your car, because there is a communication network that links you up with them while you are driving. Your smartphone connects you to the internet, where you can shop, play games together with your friends and download apps that make your phone more versatile. It syncs with your email accounts and any sync-enabled application, helps you make payments on the go (mobile banking) and provides access to your data anywhere through cloud-based tools like Dropbox. The GPS on your phone helps you find your way in a city by showing you on a city map that has been downloaded onto your phone over wi-fi or a similar data connection. You can drive safely even in a city that is new to you! These examples demonstrate an interaction between humans, electronic devices (which may have sensors), mechanical devices and the traditional internet. By traditional internet I mean the internet which was seen initially as just a repository of information and which has now grown to include processing engines, like those which facilitate voice-enabled search and SMS on your smartphone, and storage and compute capacity for cloud applications (like Amazon’s EC2 service).
Thus the “Internet of Things” is nothing but a network where human actions, electrical and mechanical devices and the internet come together to interact in a meaningful way. The scope of this interaction can be as varied and wide as needed, depending on the intended result.