sharadsinha

Posts Tagged ‘Electronic System Design’

DST AWSAR Award 2021 and VLSID 2022

In Education, Embedded Systems, Research and Development on March 11, 2022 at 1:27 AM

The 35th International Conference on VLSI Design and the 31st International Conference on Embedded Systems were held from 26 February to 2 March 2022. VLSI design and embedded systems are two of the areas in which my group carries out research. Encouraging students to participate in these conferences is one way of helping them align with their field more closely. Prachi Kashikar, the first PhD student in my group, received the VLSID 2022 Fellowship. This year, I also served as a TPC member in the Embedded Systems track.

Another PhD student, Pavitra Bhade, is one of the winners of the DST AWSAR Award 2021 for the Best Story in the PhD category. The AWSAR award, sponsored by the Department of Science and Technology (DST), Government of India, aims to promote storytelling approaches to describing research. AWSAR stands for Augmenting Writing Skills for Articulating Research. The goal is to make research work accessible to non-experts. Pavitra wrote a story about her research on detecting cache side-channel attacks, using a children's playground as an analogy.

User Interface (UI) Design for Computer Systems

In Design Methodologies, Embedded Systems, Engineering Principles on January 13, 2016 at 8:03 PM

I believe that a proper User Interface (UI) design for computer systems is a must. All the technical, scientific and engineering wizardry that engineers may pour into writing code and developing the system comes to naught if the user interface is not human-centric. There are countless examples of poor UI design, and they can be found even at places that excel in research and development. Would it not be surprising to visit a renowned research lab or university and find that it takes a user a while to figure out how to use a machine to update some data on a card? It can be a bewildering experience.

When you go to an ATM to withdraw money, you are actually interacting with the machine through a user interface (UI). You insert your card, provide security details and choose options from the on-screen menu. This is all fine as long as you understand the language used by the machine. These and similar machines, such as queue number dispensers and ticket vending machines, are in frequent use these days.

Among other things, I consider the choice of language to be the most important decision that a user should be allowed to make before providing other inputs to the machine for processing. If the user does not understand the current language and it takes a while to figure out how to change it, the user is left with a bad experience.

The very first view on the screen of such a machine should be related to the selection of a language. The message there could be "Choose a language", with the list of supported languages shown alongside it. Of course, this assumes that the user would understand the message "Choose a language" written in one of the supported languages. A better option, I think, is to simply show all the supported languages without any message. The user can then select one, and the usual process follows thereafter. Such a design would work best with ATMs, ticket vending machines and the like, i.e. machines with which a user interacts instead of simply relying on them for information. For instance, the speedometer display of your car just provides you with information; you do not interact with it. For such interfaces, other UI designs will be more suitable.
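To make the "languages first" idea concrete, here is a small, hypothetical console-style sketch in C (the language list and strings are made up for illustration and are not taken from any real kiosk software): the first screen shows nothing but the supported languages, each written in its own script, and only after a selection does any other prompt appear.

```c
#include <stdio.h>

/* Each supported language is written in its own script, so no prior
   message needs to be understood before making a choice. */
static const char *languages[] = { "English", "हिन्दी", "中文", "Español" };
static const int num_languages = 4;

int main(void) {
    /* First screen: only the language list, no instruction text. */
    for (int i = 0; i < num_languages; i++)
        printf("%d. %s\n", i + 1, languages[i]);

    int choice = 0;
    if (scanf("%d", &choice) != 1 || choice < 1 || choice > num_languages)
        choice = 1;  /* fall back to a default language */

    /* From this point on, every prompt would be rendered in the
       selected language, e.g. by loading a matching string table. */
    printf("-> %s\n", languages[choice - 1]);
    return 0;
}
```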

The problem with UI design in many systems is that it is done by engineers, and managed by managers, who have little training in this sphere or simply do not care to think about it as much as they would about the software and hardware design of the system. This results in a clunky and sometimes dangerous user interface. Here are some examples of poor UI designs and their effects.

So, the next time when you do a UI design, please have some consideration for the poor users and let them have an easy life! 😉

Component Problems with Electronic Systems

In Education, Embedded Systems, Engineering Principles on December 30, 2014 at 9:37 PM

It is not surprising to find component problems with electronic systems. I was working with a Zedboard recently and it would just not boot from the supplied SD card. The serial driver was properly installed, but the LED would not light up. The host PC's operating system did not complain about any driver issues. Some members on the Zedboard forum complained about a micro-USB socket problem on the board. In any case, when working with a development or an evaluation board, it can become difficult to diagnose such issues. I tried different SD cards as well, but to no avail. My laptop could recognize the SD card, but Windows was unable to format it!

This experience makes me feel that it is relatively easy to simulate a design and test it for functional correctness. It is far more frustrating when components on a board stop working and you do not know which one. In my case, the SD card could be corrupt, the SD card reader could be faulty, or, according to the forums, there could be issues with the serial port driver, and so on. It is not that the issue is difficult to diagnose. It is just that you have to isolate the problem by working through the possible causes one by one. This wastes a lot of time, especially when you expect a dev/eval board to be up and running quickly.

One board can take away so much time. Imagine having to do this for 20 such boards, which is usually the case when boards are procured for student laboratory exercises! Can't there be a better way to know the status of components? Perhaps it is time to investigate this!

What is the purpose of a lab?

In Education, Embedded Systems on July 22, 2014 at 9:22 PM

Laboratory sessions at universities form an integral part of the curriculum. This is especially the case in science and engineering disciplines. While different disciplines have different requirements regarding what is actually done in these sessions, a basic question to ask is: what is their purpose? I will discuss this with respect to labs in a computer engineering curriculum. These lab sessions are meant to give students hands-on experience in working with devices like micro-controllers, microprocessors, field programmable gate arrays (FPGAs) etc. Oftentimes, students are given code (programs in a programming language) written by a teaching assistant (TA), which they are expected to use to program the device through some Integrated Development Environment (IDE). The students may be required to modify these programs based on the lab exercises.

Among other things, I have realized that there is too much emphasis on learning how to use the IDEs. This is not peculiar to one country or university; it seems to be the norm at many places if you look at the lab descriptions available online. It is true that different IDEs look dissimilar (obviously!) and that the options they provide can sit in different parts of the graphical user interface (GUI) and under different menus. However, they all follow a basic flow which is essential and relevant to the system or device that they target. Good IDEs are similar in layout and easy to navigate. Therefore, it should be easy for students to move from one IDE to another after they have learned at least one properly. Besides, it is not so much the IDEs themselves as the different steps in the flow that are essential to learn. After all, IDEs package the different steps necessary to program such systems and devices into one coherent click-and-run flow.

I believe that lab sessions are meant to complement lecture-based learning. How the different steps, algorithms, methods etc. taught in class come together in a coherent manner to enable the programming of such systems is an important learning outcome. Besides, when working with development boards and evaluation kits, students can learn to navigate user guides, reference designs, schematics, bill of materials (BOM) files etc. These will seldom be taught in the classroom, but they form a very important part of an engineer's life in industry. Lab sessions provide an opportunity for students to relate and expand their classroom-based learning to what actually goes into designing, building and testing real-world systems. I think that should be one of the most important guiding factors for faculty members when designing lab sessions.

The Unlikely Places for Electronics Hardware Work

In Embedded Systems, Science & Technology Promotion and Public Policy on June 28, 2014 at 11:27 PM

The world is always changing, and big data is changing it in even newer ways. Until a few years ago, no one would have thought that data-crunching companies and software companies would get involved in electronics hardware design work. However, that is the case today. Microsoft is building programmable chips and hardware to speed up its Bing search engine (see here and here). Amazon just released its own smartphone (see here). Companies like Google and Facebook, which would typically use off-the-shelf hardware to build their datacenters, are now getting involved in real hardware design in order to make their datacenters more power-efficient and to increase their performance (see here and here). If one were to look at the career openings in these companies, one can find openings for people with electronics or computer hardware design backgrounds.

On the other hand, if one were to look at companies like IBM, Cisco, Oracle etc., the number of openings in these areas is comparable to those at Google and the like. It is no surprise that some industry watchers have begun to wonder whether IBM is trying to become Google and Google is trying to become IBM. There was a time when IBM did a tremendous amount of computer hardware-related work, but that is not the case today. A lot of its activity now involves software.

While companies like Marvell, ST Microelectronics, Infineon etc. continue to work in the hardware domain and supply parts to different players in the electronics ecosystem, companies like Amazon have emerged as the dark horses in this space. They may not be as diverse as Infineon and the others, but they are very focused on what they want to do and what they want to offer. Their direction of work is very customer-oriented and involves product design, which many people like to get involved with.

Do you read User Guides?

In Design Methodologies, Education, Embedded Systems, Engineering Principles, Research and Development on May 14, 2014 at 6:32 PM

I am a member of LinkedIn and, like many of you, am also a member of quite a few LinkedIn groups. The good thing about LinkedIn groups is that the discussions remain professional in tone and content. This is why I like them compared to discussions on other social media platforms, where the tone and content can vary from the most professional to the most ridiculous. In a discussion in one such LinkedIn group meant for engineers, someone admitted that very few engineers or users of tech tools read the user guides. This is not far from reality. I saw it when I interacted with practicing engineers on a more regular basis than I do now, and I also see it in academic life.

Personally, I find the user guides of development boards and of software and hardware tools extremely useful. Reading them once gives me enough confidence to extract the best out of these tools. For instance, user guides from FPGA vendors are very helpful, and I am more confident about my design after having referred to the user guide at least once, even though these guides can often be voluminous. I guess the verbosity of these guides is one main reason why people don't feel like reading them. The other reason, I think, is the propensity of many practicing engineers, graduate students and others to get their hands dirty as soon as possible. They want to write code, design a circuit, run simulations etc. without getting bored reading these guides. While this enthusiasm to start working is worth appreciating, ignoring the "reading" part leads to problems later in the product development process or research methodology, and those problems have the potential to creep into the results. Basically, this haste leaves one vulnerable to questioning at a later stage. Sometimes it can prove very costly as well, especially when product development is involved. Of course, one can always talk about pressure for results from managers, supervisors, customers etc.; this is not a very good excuse. Good managers also understand the importance of staying abreast of background information.

Is this issue observed more in the engineering industry than in, say, the banking or insurance sectors, or for that matter in safety-critical engineering domains? Perhaps. Engineers take great pride in fixing things. They can patch software, make new releases, change components or simply replace the product. However, bankers and insurers cannot do much once money is gone. The fear of losing money is too great to sustain a dislike for reading guides, whitepapers etc. Similarly, those involved with safety-critical engineering domains are so mindful of liability issues that an aversion to poring over thick user guides is probably a non-issue.

One can also argue that the presentation style of many user guides is quite boring. I agree, when they are compared with things that provide an "instant thrill" and thus lead to a desire to know more. User guides do not provide that thrill, but writing code, experimenting with a development board etc. does give a lot of thrill to many engineers. Nevertheless, when it comes to getting a job done properly, there is no other choice but to sweat it out! 🙂

A Tale of Two Samsung Galaxy S4s

In Design Methodologies, Education, Embedded Systems, Engineering Principles on May 14, 2013 at 7:23 PM

When you are in school or college, you are taught about the best ways to do things. It is generally about a point solution. Alternatives are rarely discussed in detail. One almost always looks for the best answer, the best method, the best algorithm. When you begin to work for a company, you almost always realize that the best solution is not what one is always looking for. Time and market pressures play a role in choosing solutions. You can choose a solution that suits the "taste of the target market". When you serve more than one market, it becomes even more interesting. Would you want to choose two different solutions for two different markets for the same product? This is one of the reasons that analysts cite regarding what Samsung has done with its Galaxy S4 smartphone. While the US and the Korean versions appear identical on the outside, they use quite a number of different components. Their processors, wireless and image processing architectures are different. Supposedly, the Korean version is faster and has a longer battery life because it uses Samsung's octa-core Exynos 5 processor, whose architecture (read here) achieves a better balance of power efficiency and performance than the Qualcomm Snapdragon processor in the US version. iSuppli's IHS Teardown Service reveals all the component-level differences between the two designs here.

A more plausible reason for the difference in the two architectures is the fact that the LTE bands supported by mobile operators in the US and Korea are different (see here). The two processors (essentially systems-on-chip in this case) may not each support both sets of LTE bands. However, it does illustrate an important point related to engineering product design: you can design the same product with different architectures. While not related to the S4, this analysis reminds me of regulations in certain countries which make it compulsory for a manufacturer to source components from local suppliers for products to be sold in the local market. An example is here. Therefore, as a manufacturer you can end up with different components in different markets for the same product.

I used to think that a consumer electronic item sold in different countries used the same components. That myth now stands broken! While you can easily spot the differences in software, the most prominent being the language used in the user interface, it is not easy to spot differences in hardware.

What is optimization?

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on April 15, 2013 at 12:04 AM

Perhaps optimization is the most abused word in research, engineering work or any task that seeks to maximize some intent. The Merriam-Webster dictionary defines it as "an act, process, or methodology of making something (as a design, system, or decision) as fully perfect, functional, or effective as possible; specifically: the mathematical procedures (as finding the maximum of a function) involved in this". We hear about optimizing power, area, a performance metric like latency, and so on. Many people pass off every design decision as an optimization strategy. While such decisions may contribute to local optimization, they may fail to achieve global optimization. In fact, such optimizations may actually degrade performance when the software or the design is used in a context that was not anticipated by the original developers. Read here for some insight into optimizing a memory allocator in C++. You will find another debatable example of optimization to make software run faster here. And here is a nice article on efficiency versus intent. Typically, optimization is associated with increasing the efficiency of a design (hardware or software) in some respect. But such optimizations should not destroy the intent of the design. This requires a bit more analysis on the part of the designer/developer to ensure that the intent is not lost. Here is another example.

The field of mathematical optimization, which is concerned with selecting the most appropriate choice (one that satisfies a given set of criteria) from a set of alternatives, is vast and varied. There are numerous techniques suitable for different kinds of problems. You can see the list here. Frankly, it is a tough job to recommend one of these techniques for non-trivial problems. One needs to understand the nature of the problem in detail to make a sound recommendation. Understanding the nature of such problems, or modeling them in a way that makes any of these techniques applicable, is a non-trivial task. It requires a lot of theoretical and practical insight.
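To make the difference between a local and a global optimum concrete, here is a small, hypothetical C sketch (not taken from any of the articles linked above): a greedy hill climb over an array of values stops at the first local maximum it reaches, which need not be the global one. Many ad hoc "optimizations" behave the same way: each step looks like an improvement, yet the overall result is not the best achievable.

```c
#include <stdio.h>

/* Greedy hill climbing: starting from index 'start', move to a neighbouring
   index as long as it holds a larger value. Returns the index of the local
   maximum at which the climb gets stuck. */
static int hill_climb(const int *v, int n, int start) {
    int i = start;
    for (;;) {
        int next = i;
        if (i > 0     && v[i - 1] > v[next]) next = i - 1;
        if (i < n - 1 && v[i + 1] > v[next]) next = i + 1;
        if (next == i) return i;  /* no better neighbour: local maximum */
        i = next;
    }
}

int main(void) {
    /* The global maximum (9) is at index 5, but a climb starting at index 0
       gets stuck at the local maximum 4 at index 1. */
    int values[] = {2, 4, 1, 3, 7, 9, 8};
    int n = (int)(sizeof values / sizeof values[0]);

    int local = hill_climb(values, n, 0);
    printf("greedy result: value %d at index %d\n", values[local], local);
    return 0;
}
```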

Relearning addition

In Design Methodologies, Education, Embedded Systems, Mathematics on March 22, 2013 at 7:13 PM

Alvin Toffler in his book "Future Shock" says that "The illiterate of the 21st century will not be those who cannot read or write; they will be those who cannot learn, unlearn, and relearn". Taking this quote a little out of the context in which Toffler used it, I would say that the process of learning, unlearning and relearning basically embodies the principles of evolution and adaptation. And these are equally applicable to education. Are they emphasized enough in universities and schools? Can they be taught? Maybe yes, maybe no. I will give one simple example here. Every electronics or computer engineer would have done some basic C programming. To add two numbers, A and B, one just needs to use the expression 'A+B'. Does it always work? Not in the world of computers, where one has to deal with overflows and underflows. And there is always a limit to the biggest number that a computer or a computing platform can support.

So, how are we going to add two arbitrarily large positive integers? Examples of positive integers are 123456, 90913456 etc. I will use positive integers to illustrate 'learn, unlearn and relearn'; the example can easily be extended to other data types. In C, a (typically 32-bit) signed integer can only hold values up to 2,147,483,647. So there is an overflow if the sum exceeds this value, and the addition is not even possible if either A or B is bigger than this value. To avoid this, one can use other data types supporting a greater number of bits, until one hits yet another ceiling. After a point, you hit the final ceiling. If the numbers really are so big, one way to deal with them is to go back to our old school days, when we learned to add numbers a pair of digits at a time with a carry propagated. Yes, that is all you need to do! And this does not require in-depth knowledge of the various IEEE methods for representing numbers. It is the simple, good old school method. Of course, the old school method may not have a very wide application, but it does help where possible, and it makes it clear that the symbol for addition "+" (or the add operator, as it is referred to in programming languages) should not make us forget how addition is done. We "learn" to add a pair of digits at a time in school, then we learn to use the "+" operator in programming languages. Thereafter we have to unlearn this concept to relearn (or recall) the school method. I have written a reference implementation in C which you can find here. You can also find its link under the software tools tab here.
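For readers who want to see the idea in code, here is a minimal sketch (this is not the reference implementation linked above, just an illustration): the two numbers are kept as decimal strings, and the digits are added from right to left with a carry, exactly as in the school method.

```c
#include <stdio.h>
#include <string.h>

/* Add two arbitrarily long non-negative integers given as decimal strings,
   one pair of digits at a time with a propagated carry, just like the
   school method. 'result' must be large enough to hold the sum. */
void add_big(const char *a, const char *b, char *result) {
    int i = (int)strlen(a) - 1;    /* last digit of a */
    int j = (int)strlen(b) - 1;    /* last digit of b */
    int k = 0, carry = 0;
    char rev[1024];                /* digits of the sum, in reverse order */

    while (i >= 0 || j >= 0 || carry) {
        int d = carry;
        if (i >= 0) d += a[i--] - '0';
        if (j >= 0) d += b[j--] - '0';
        rev[k++] = (char)('0' + d % 10);
        carry = d / 10;
    }
    for (int m = 0; m < k; m++)    /* reverse into the output buffer */
        result[m] = rev[k - 1 - m];
    result[k] = '\0';
}

int main(void) {
    char sum[1024];
    /* A sum that overflows a 32-bit signed int but is trivial for the school method. */
    add_big("2147483647", "2147483647", sum);
    printf("%s\n", sum);           /* prints 4294967294 */
    return 0;
}
```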

Numerical Stability in Calculations

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on January 24, 2013 at 11:44 PM

I did not have any course on algorithms in my undergraduate education; I studied them (their properties, design etc.) during my research work. I now realize why their study is important for anyone who wants to be really good at designing algorithms or implementing them. After all, algorithms solve problems. I recently came across the subject of numerical stability of algorithms, numerical algorithms to be precise. While algorithms help solve problems, they need to be implemented on a digital machine (a computer, for example) which has limited precision. Whatever number system we use, it cannot cover all the numbers of exact mathematics. This leads to approximations as well as to upper and lower bounds on the numbers that can be represented. The approximations, in turn, can be a source of errors and deviations from the exact numerical answer.

For instance, on a machine with only 3-digit precision, numbers like 22, 2.22, 0.110, 100 and 177 can be represented. Now if you try to add 2 and 1000 instances of 0.11, your sum would be 112 on this machine, which matches the exact answer. Similarly, if you try to add 9 and 9 instances of 0.11, the answer on this machine would be 9.99, which again matches the exact answer. However, if you try to add 10 and 9 instances of 0.11 in that order, i.e. 10+0.11+0.11+…, the machine would return 10 as the answer, because the moment you try to add 0.11 to 10 you exceed the precision of the machine. Now imagine doing the same calculation in the reverse order, i.e. adding all nine 0.11's first and then 10, i.e. 0.11+0.11+…+10: the machine would return an answer of 0.99, which is far off from the actual answer 10.99 and far worse than the previous approximation of 10 (for the other order of addition). This means that the way you arrange the numbers to be added (in memory, for instance in an array) may also influence the sum! I wish that embedded systems engineers would read more on this subject so that the numerical errors we see cropping up in such systems are reduced. A nice introduction is on Wikipedia.
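The same effect is easy to reproduce on real hardware. Here is a minimal C sketch, using IEEE single-precision floats instead of the 3-digit toy machine above (the numbers are chosen purely for illustration): adding many small values to a large one loses them to rounding, while summing the small values first and adding the large value at the end keeps them.

```c
#include <stdio.h>

int main(void) {
    /* Near 1e8, consecutive 32-bit floats are 8 apart, so adding 1.0f to
       100000000.0f rounds straight back to 100000000.0f. */
    float big = 100000000.0f;
    int   n   = 1000;

    float big_first = big;
    for (int i = 0; i < n; i++)
        big_first += 1.0f;        /* each 1.0f is rounded away */

    float small_first = 0.0f;
    for (int i = 0; i < n; i++)
        small_first += 1.0f;      /* accumulates exactly to 1000.0f */
    small_first += big;

    printf("large value first : %.1f\n", big_first);    /* 100000000.0 */
    printf("small values first: %.1f\n", small_first);  /* 100001000.0 */
    return 0;
}
```

The exact answer is 100001000; only the second ordering reaches it.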