
The World as a State Machine

In Design Methodologies, Education, Engineering Principles, Mathematics on April 29, 2013 at 9:46 PM

A state machine is basically a model of computation that helps one analyze the effects of inputs on a system. Such a system can be in different states over its life cycle, though in only one state at a time. It transitions from one state to another depending on some input. Every state machine has a start state, and it progresses from there through other states, eventually reaching an end state. Note that the end state may be reachable from any intermediate state as well as from the start state; it depends on the system being modeled. Also, the output in each state may depend on the current state as well as the inputs to that state. Thus state machines model reactive systems, i.e. systems which react. A good description of state machines can be found here. Note that the description there is about finite state machines, so called because they have a finite number of states. State machines are used in many fields of study, not just electrical or computer engineering: they appear in biology, mathematics, linguistics etc. They also have different variants, each trying to capture some additional parameters of a system, which I will not go into. You can read about them at the link mentioned earlier.

I was wondering if the world can be modeled as a state machine. I think the world is in fact a state machine, except that its end state is unknown. Those with absolute faith in cosmological physics would say that the "Big Bang" can be considered the start state. Those with religious views might consider something else the start state. The beauty of modeling the world as a state machine lies in the fact that it does not matter whether you believe in science or not. It does not matter whether you have more of a religious bent of mind and would like to see the world from a religious or theological perspective, or whether you want to see it only from a scientific standpoint. Either way, the world can be modeled as a state machine; you get to choose the start state depending on which viewpoint you are more comfortable with. In both cases, the world is in fact a reactive system. It can even be considered an aggregation of interacting state machines, where each state machine represents the economic, social, political, religious or scientific state of the world. And nobody would deny that all these influence each other. Every electrical or computer engineering student studies Moore and Mealy state machines. To them, the world is probably a Mealy state machine, though not strictly so: the output in any state that this world resides in depends not only on the current inputs but also on the current state. If we look around us, that sounds true, does it not? However, this state machine is extremely complex!
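For the engineers in the audience, here is a minimal sketch of a Mealy machine in C: a toy rising-edge detector (my own illustrative example, not drawn from any particular textbook) whose output depends on both the current state and the current input.

```c
#include <stdio.h>

typedef enum { LOW, HIGH } State;

/* One step of a Mealy machine: the output is 1 exactly when the input
 * rises from 0 to 1, so it depends on the state *and* the input. */
int step(State *state, int input) {
    int output = (*state == LOW && input == 1);
    *state = input ? HIGH : LOW;   /* state transition */
    return output;
}

int main(void) {
    State s = LOW;                 /* start state */
    int inputs[] = {0, 1, 1, 0, 1};
    for (int i = 0; i < 5; i++)
        printf("input %d -> output %d\n", inputs[i], step(&s, inputs[i]));
    return 0;
}
```

Feeding it the sequence 0, 1, 1, 0, 1 produces outputs 0, 1, 0, 0, 1: the same input (1) yields different outputs depending on the state the machine is in.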

Role of Industrial Consortia in Education and Research

In Education, Embedded Systems, Industrial Consortia, Research and Development on February 8, 2013 at 6:58 PM

A Google search will reveal the existence of quite a few influential industrial consortia furthering the cause of research and education in fields identified by them. Almost all of them are run jointly by people from industry and from prominent educational and research institutions. You can find a list of them compiled here; I have listed only the ones relevant to the electronics and computer industries. I have found that not many students are aware of these consortia, and that should not be the case. Some of them are highly active and contribute a lot to research, development of technology and education. Consortia like the Accellera Systems Initiative have contributed to a number of IEEE standards, some of which can be downloaded for free from its website. The Semiconductor Research Corporation plays an important role in promoting research and education in the field of semiconductors. The International Technology Roadmap for Semiconductors has played an immense role in identifying challenges before the semiconductor industry, from design to manufacturing to testing and validation. Many of these associations also offer scholarships and fellowships for students and research grants for faculty members. Their publications provide a lot of insight into the challenges of the present and of the future. These publications may not always contain the sort of in-depth research material most graduate students are accustomed to, but they successfully paint the bigger picture. Paying attention to such material can help in keeping research relevant to industry where necessary. It also helps in learning about real-world problems and the challenges involved in translating research into technology that can be scaled up and widely used. Sometimes problems are considered solved in academic research, yet the solutions never make it to the market, even when relevant, because their translation into scalable technology remains an open problem.

What is there in the word “distance”?

In Education, Mathematics on February 6, 2013 at 9:32 PM

I first learned about distances in school, in geometry class. Of course, it was not called classical geometry in school but just geometry. The initial concepts were related to distances between points in a plane, between lines in a plane, and between a line and a point in a plane. A plane, as you know, is a 2-dimensional (2D) space. In high school, this concept was extended to 3-dimensional (3D) space. The concept of distance basically gives an idea of how far apart (or how close) two things (points, lines) are. What I learned was the "2-norm distance" (the typical Euclidean distance):

2-norm distance = \sqrt{\displaystyle\sum_{i=1}^{n} (x_{i} - y_{i})^2}

I learned about the Hamming distance during my undergraduate courses on electronic communication. However, it was only during research that I learned a lot more about distances. My first surprise came when I heard about distances in a class on image processing: you can use distances to measure similarity between images! Of course, the definitions and the methods to calculate those distances were also different. Since then I have learned that distances are one major way of identifying similarities between objects or classes of objects. The central idea behind all these different kinds of distances (not just in image processing) remains the same: to measure how far the objects are from each other in some respect. For instance, in psychoanalysis, "emotional distance" is the degree of emotional detachment from some person or events; the Czekanowski-Dice distance is used to compare two audio waveforms x and y in the time domain; and so on. If your distance from the world of distances is not big ;), you might want to try reading the Dictionary of Distances.
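As a small illustration (my own sketch), here are two of these distances in C: the Hamming distance between two binary words and the 2-norm distance from the formula above.

```c
#include <stdio.h>
#include <math.h>

/* Hamming distance: the number of bit positions in which two words differ. */
unsigned hamming(unsigned x, unsigned y) {
    unsigned diff = x ^ y, count = 0;
    while (diff) { count += diff & 1u; diff >>= 1; }
    return count;
}

/* 2-norm (Euclidean) distance between two n-dimensional points. */
double euclidean(const double *x, const double *y, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += (x[i] - y[i]) * (x[i] - y[i]);
    return sqrt(sum);
}

int main(void) {
    double a[2] = {0.0, 0.0}, b[2] = {3.0, 4.0};
    printf("Hamming(1011, 1001) = %u\n", hamming(0xB, 0x9));       /* 1 */
    printf("Euclidean((0,0),(3,4)) = %.1f\n", euclidean(a, b, 2)); /* 5.0 */
    return 0;
}
```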

Experiments in Computer Science/Engineering?

In Design Methodologies, Education, Embedded Systems on February 1, 2013 at 1:35 AM

A friend of mine, who is doing a project on implementing various image processing algorithms (like edge detection, adding colors etc.), was asked by the supervisor concerned to conduct experiments as part of the work. This friend then asked me what the supervisor meant by experiments in this particular case. I was taken aback initially because I had not come across the term experiment being associated with computer science/engineering in a case where the principal job was to implement algorithms already developed by someone else and package the implementation as software. Here there is no hypothesis to be tested, which is an integral part of any experimental science or approach! If the student's job had been to choose an edge detection algorithm for implementation, by running controlled experiments with different kinds of edge detection techniques on the same kind of workload, then that would have qualified as an experiment.

Nevertheless, I made some suggestions: examine the execution time of the developed software package as the input image size changes; test whether there is any dependency on the image format; and test the performance (visual perception of the quality of the result, execution time) as the amount of information varies across images (for example, an image with a few straight lines/curves in a few orientations vs. an image with hundreds of straight lines/curves in varying orientations). I do not know if the supervisor meant this or something else, or whether the term was used loosely to refer to software testing.
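To make the first suggestion concrete, here is a minimal C sketch of such an experiment; the simple gradient pass stands in for my friend's actual edge-detection code, which I have not seen.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for the edge-detection routine under test: a horizontal
 * gradient pass, purely illustrative. */
static void detect_edges(unsigned char *img, int w, int h) {
    for (int y = 0; y < h; y++)
        for (int x = 1; x < w; x++)
            img[y * w + x] = (unsigned char)abs(img[y * w + x] - img[y * w + x - 1]);
}

int main(void) {
    /* Measure execution time as the input image size grows. */
    for (int n = 256; n <= 2048; n *= 2) {
        unsigned char *img = malloc((size_t)n * n);
        for (int i = 0; i < n * n; i++) img[i] = (unsigned char)(i % 251);
        clock_t start = clock();
        detect_edges(img, n, n);
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%4d x %4d image: %.4f s\n", n, n, secs);
        free(img);
    }
    return 0;
}
```

Plotting the measured time against the image size would let one test a hypothesis such as "the running time grows linearly with the number of pixels".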

However, I decided to explore this topic a little more. I found that Stanford University has a graduate-level course titled Designing Computer Science Experiments. An excellent paper on what experimental computer science is, by Peter J. Denning, a former ACM President, was published in 1980 and can be found here. A good repository of resources is maintained by Prof. Dror Feitelson of the Hebrew University, Israel, here. Researchers in the field of computer architecture theorize (make a hypothesis) and do a lot of experiments to test their theory. For instance, people work on different kinds of FPGA architectures to see their benefits and drawbacks. The essential point is that in an experimental approach, one states a hypothesis, conducts experiments, and then analyzes the data generated by the experiments to test the hypothesis.

Numerical Stability in Calculations

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on January 24, 2013 at 11:44 PM

I did not have any course on algorithms in my undergraduate education; I studied them (their properties, design etc.) during my research work. I now realize why their study is important for anyone who wants to be really good at designing algorithms or implementing them. After all, algorithms solve problems. I recently came across the subject of numerical stability of algorithms, numerical algorithms to be precise. While algorithms help solve problems, they need to be implemented on a digital machine (a computer, for example) which has limited precision. Whatever number system we use, it cannot cover all the numbers of exact mathematics. This leads to approximations, as well as upper and lower bounds on the numbers that can be represented, and these approximations can be a source of errors and deviations from the exact numerical answer. For instance, on a machine with only 3 digits of precision, numbers like 22, 2.22, 0.110, 100 and 177 can be represented. If you add 2 and fifty instances of 0.11, every intermediate sum (2.11, 2.22, ..., 7.50) fits in three digits, so the machine returns the exact answer 7.50. Similarly, if you add 9 and nine instances of 0.11, the intermediate sums 9.11, 9.22, ..., 9.99 all fit, and the machine returns the exact answer 9.99. However, suppose you add 100 and nine instances of 0.11 in that order, i.e. 100 + 0.11 + 0.11 + ...: the moment you add 0.11 to 100 you get 100.11, which needs more than three digits and rounds back to 100, so every addition is lost and the machine returns 100 while the exact answer is 100.99. Now imagine doing the same calculation in the reverse order, i.e. adding the nine 0.11's first and then 100, i.e. 0.11 + 0.11 + ... + 100: the partial sum 0.99 is exactly representable, and 0.99 + 100 = 100.99 rounds to 101, far closer to the exact answer than the 100 produced by the other order. This means that the way you arrange the numbers to be added (in memory, for instance in an array) may influence the sum! I wish embedded systems engineers read more on this subject so that the numerical errors we see cropping up in such systems get reduced. A nice introduction is on Wikipedia.
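The same effect is easy to reproduce on a real machine. Here is a minimal C sketch using single-precision floats (about 7 significant decimal digits): adding 1.0 to 1.0e8 ten thousand times loses every single addition, while summing the small numbers first preserves them.

```c
#include <stdio.h>

int main(void) {
    /* Large number first: each 1.0 is below the rounding granularity
     * of floats near 1.0e8, so every addition is lost. */
    float big_first = 1.0e8f;
    for (int i = 0; i < 10000; i++) big_first += 1.0f;

    /* Small numbers first: their sum (10000.0) is exact, and adding
     * it to 1.0e8 in one step survives the rounding. */
    float small_sum = 0.0f;
    for (int i = 0; i < 10000; i++) small_sum += 1.0f;
    float small_first = small_sum + 1.0e8f;

    printf("large first: %.1f\n", big_first);    /* 100000000.0 */
    printf("small first: %.1f\n", small_first);  /* 100010000.0 */
    return 0;
}
```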

The Internet of Things

In Embedded Systems, Engineering Principles, Interdisciplinary Science on November 29, 2012 at 2:43 PM

When I first attended a presentation on "The Internet of Things", I was not very excited; it turned out to be nothing more than a glorified description of sensor networks. Though this phrase was first used in 1999, as reported in an article in the RFID Journal, it has been interpreted in many different ways by different people, and finding a way through that maze of descriptions is really difficult. However, after reading a lot about it, and based on my own understanding of embedded systems, sensor networks and systems engineering, I would like to share what it means for a non-technical audience. I find it best to explain through examples. Take the case of a smart home: you can control the appliances in your home from your car, because a communication network links you up with them while you are driving. Your smartphone connects you to the internet, where you can shop, play games with your friends and download apps that make your phone more versatile. It syncs with your email accounts and any sync-enabled application, helps you make payments on the go (mobile banking), and provides access to your data anywhere through cloud-based tools like Dropbox. The GPS on your phone helps you find your way in a city by showing your position on a city map that has been downloaded to your phone over Wi-Fi or a similar data connection; you can drive almost safely even in a city that is new to you! These examples demonstrate an interaction between humans, electronic devices which may have sensors, mechanical devices and the traditional internet. By traditional internet I mean the internet which was seen initially as just a repository of information and which has now grown to include processing engines, like those which facilitate voice-enabled search and SMS on your smartphone, and storage and compute capacity for cloud applications (like Amazon's EC2 service). Thus the "Internet of Things" is nothing but a network where human actions, electrical and mechanical devices and the internet come together to interact in a meaningful way. The scope of this interaction can be as wide and varied as needed, depending on the intended result.

A Case for Electrical and Electronic Measurement

In Design Methodologies, Education, Embedded Systems on October 23, 2012 at 12:36 AM

Perhaps one of the least emphasized parts of university education in electrical, electronics or computer engineering is the field of electrical and electronic measurements. Electrical measurements generally involve measuring current, voltage and resistance. In an embedded system that has sensors, such measurements can play a critical role: the outputs of these sensors are converted to either current or voltage before further processing in software or hardware. Not only to test such a system but also to design it properly, it is important to understand the basic concepts of measurement, like accuracy, repeatability, resolution, instrument error, instrument noise, cable capacitance, probe resistance, instrument calibration etc. I had my first real experience with some really tough measurements on an OC-192 board for a telecommunication application while trying to debug some issues. I must say that while we place a lot of emphasis on software and hardware design issues, it is also important to consider the measurement side of the story in order to test whether the software and the hardware are working properly. Measurement concepts like instrument calibration, sensitivity and timing are very important in a test set-up. Sometimes we miss these things, resulting in a mismatch between requirements and implementation. Keithley's Getting Back to the Basics of Electrical Measurements is good both as an introduction and for refreshing one's basic knowledge.
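As a small illustration of why calibration matters when a sensor output is converted to a voltage, here is a hedged C sketch; the ADC resolution, reference voltage, and the gain/offset corrections are all assumed values, which in practice would come from the datasheet and from calibration against a trusted reference instrument.

```c
#include <stdio.h>

/* Hypothetical 12-bit ADC parameters (assumed for this sketch). */
#define ADC_BITS    12
#define ADC_VREF    3.3     /* reference voltage in volts (assumed) */
#define CAL_GAIN    1.002   /* gain correction from calibration (assumed) */
#define CAL_OFFSET -0.004   /* offset correction in volts (assumed) */

/* Convert a raw ADC count to a calibrated voltage. */
double adc_to_volts(unsigned raw) {
    double uncal = (double)raw * ADC_VREF / ((1u << ADC_BITS) - 1);
    return uncal * CAL_GAIN + CAL_OFFSET;
}

int main(void) {
    /* Mid-scale reading: the calibrated and uncalibrated values differ
     * by a few millivolts, which can matter for a sensitive sensor. */
    printf("raw 2048 -> %.4f V\n", adc_to_volts(2048));
    return 0;
}
```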

Error Documentation: Why not?

In Design Methodologies, Embedded Systems on August 27, 2012 at 12:48 PM

I am sure that many of you who have used any software tool that throws up errors have spent time (at one point or another) figuring out what those errors mean. Every software tool used in an electronics or software design project throws up errors, be it GCC, an EDA tool or anything else. One might have used the vendor's support channel, user forums, websites like Stack Overflow etc. to understand the meaning of those errors. Often, these errors make no immediate sense to the user. There are also many errors which can arise for multiple reasons; once one has a list of these reasons, one has to choose the one most likely to apply to the case at hand. All this reduces productivity: the time spent searching, gathering and analyzing information could be better spent on design. Would it not be better if tool vendors also released documentation on the different kinds of errors their tools might throw up and the associated reasons? I believe such a "ready reference" would be very beneficial. After all, during the development of those tools, the vendors are indeed aware of why a particular error is thrown. Why not just compile all that information in one place and help the user? Also, the errors are not always due to problems in the design source files; sometimes they occur because the tool expects the user to structure the project, tool inputs etc. in a certain way. Given the complexity of modern EDA and other development tools, and the time spent learning them for effective use, this extra level of documentation from vendors would only be welcome.

Can a computer do envy-free division?

In Education, Interdisciplinary Science on July 28, 2012 at 10:15 PM

We have all studied division. In the world of simple mathematics, 8 divided by 2 is always 4. But what about dividing a cake into 2 equal pieces? A computer program can always divide 8 by 2 and give 4 as the answer, but can a computer program divide a cake into 2 equal pieces? Let us make it a bit more complicated. Say the cake has to be divided between persons A and B in such a way that neither of them feels that the other person got more. This means that neither A nor B will envy the share received by the other. So here the notion of equal division has to be understood in the context of the result leading to an envy-free solution. This is the subject of "fair division", also known as the cake-cutting problem. It is studied in politics, mathematics, economics and the like. Methods and algorithms have been proposed to achieve fair division, but all require inputs from the parties involved at different stages of the procedure. Note that these inputs need not be disclosed, as they could be the feelings/assumptions/conclusions running in the minds of the parties involved. This means that different inputs at different stages can lead to different outcomes. Does it remind you of the "observer effect" in physics? Yes: the inputs (a party's observation of the current state of the division) affect the outcome of the division (the phenomenon being observed). It is impossible (?) for a computer to solve a problem of this type entirely on its own. Such problems arise routinely in the allocation of goods, dispute resolution, negotiation of treaties etc.

Borrowing terms from economics, a number can be treated as a 'homogeneous good', while a cake is essentially a 'heterogeneous good', as different parts of it can taste different. Hence its envy-free division is far more complicated. If you are interested, try reading "Fair Division: From Cake-Cutting to Dispute Resolution", an excellent book by Steven J. Brams (a political scientist) and Alan D. Taylor (a mathematician).
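To see how such a procedure needs inputs from the parties, here is a minimal C sketch of divide-and-choose, the classic two-player envy-free procedure: the cake is the interval [0,1], and the two valuation densities (one player prefers the left end, the other the right) are assumptions I made up to model a heterogeneous good.

```c
#include <stdio.h>

/* Assumed private valuation densities over the cake [0,1];
 * each integrates to 1 (purely illustrative). */
static double density_A(double t) { return 2.0 * (1.0 - t); } /* A prefers left */
static double density_B(double t) { return 2.0 * t; }         /* B prefers right */

/* Numerically integrate a density over [from, to] (midpoint rule). */
static double value(double (*d)(double), double from, double to) {
    const int steps = 10000;
    double h = (to - from) / steps, sum = 0.0;
    for (int i = 0; i < steps; i++) sum += d(from + (i + 0.5) * h) * h;
    return sum;
}

int main(void) {
    /* A cuts at the point where A values both pieces equally
     * (binary search on the cut position). */
    double lo = 0.0, hi = 1.0, cut = 0.5;
    for (int i = 0; i < 50; i++) {
        cut = 0.5 * (lo + hi);
        if (value(density_A, 0.0, cut) < 0.5) lo = cut; else hi = cut;
    }
    /* B chooses the piece B values more; A takes the other. */
    int b_takes_left = value(density_B, 0.0, cut) > value(density_B, cut, 1.0);
    printf("A cuts at t = %.4f\n", cut);
    printf("B takes the %s piece; A takes the %s piece.\n",
           b_takes_left ? "left" : "right", b_takes_left ? "right" : "left");
    return 0;
}
```

Note that the outcome depends entirely on the two valuation functions, i.e. on the inputs from the parties: change either density and both the cut point and the chosen piece can change, which is exactly the point made above.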

Teaching Productive Programming

In Education on June 19, 2012 at 11:29 PM

This past semester (Jan-May 2012), I supervised a lab on Data Structures and C for first-year undergraduates. It was a good experience; the students were very bright and they all did well in their assignments. Recently, I have been doing quite a lot of programming related to electronic design automation as part of my research, to test some of our proposed algorithms. I came to the realization that while many students are taught programming with a heavy focus on improving their programming skills (smaller code size, faster implementation etc.), there is a lack of focus on teaching them how to manage large codebases. When these students go out to work in industry, they won't be writing just one program; they will be writing many as part of a single project. Understanding how to keep code modular by splitting it into multiple source files and a few header files is very important. It also helps in code reuse, which unfortunately is grossly underemphasized in universities but is a huge practice in industry. Writing code that can be reused requires skill in writing proper comments, naming variables and functions meaningfully, and maintaining proper code documentation. While some of these are dealt with in pieces here and there, it is important to let students see and appreciate the need for this process. Reducing effort and increasing productivity is another important issue that is underestimated in teaching, and these two need not always be equated with superb coding skills. The ability to write makefiles and compile multiple source files using them, keep directories clean, have separate release and build directories, and understand the need for bug tracking systems (like Bugzilla) and source code/project/file versioning systems (like TortoiseHg or TortoiseCVS) is important for successful, clean and productive project development. Many of these abilities are equally applicable to software and hardware development. Things like versioning systems and bug tracking also help in understanding how people work in teams. It is all about team play when it comes to conceptualizing, designing, building and shipping a product to the market. Take a look at the team size chart of a typical high-end product development team (itself a combination of multiple sub-teams) here and convince yourself!
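As one concrete illustration, here is a minimal sketch of such a modular layout in C: a small stack module (a made-up example) split into a header, an implementation file and a user, with the compile steps a makefile would automate shown in a comment.

```c
/* stack.h -- the public interface; the header guard lets many source
 * files include it safely. */
#ifndef STACK_H
#define STACK_H

typedef struct { int data[64]; int top; } Stack;

void stack_init(Stack *s);
int  stack_push(Stack *s, int v);   /* returns 0 on success, -1 if full */

#endif /* STACK_H */

/* stack.c -- the implementation, compiled separately from its users
 * and reusable across projects. */
#include "stack.h"

void stack_init(Stack *s) { s->top = 0; }

int stack_push(Stack *s, int v) {
    if (s->top >= 64) return -1;
    s->data[s->top++] = v;
    return 0;
}

/* main.c -- one user of the module. A makefile would automate:
 *   gcc -c stack.c && gcc -c main.c && gcc stack.o main.o -o demo */
#include <stdio.h>
#include "stack.h"

int main(void) {
    Stack s;
    stack_init(&s);
    stack_push(&s, 42);
    printf("top element: %d\n", s.data[s.top - 1]);
    return 0;
}
```

The payoff of this split is that stack.c can be recompiled, tested or reused without touching its users, which is exactly the habit large codebases demand.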