## Role of Industrial Consortia in Education and Research

In Education, Embedded Systems, Industrial Consortia, Research and Development on February 8, 2013 at 6:58 PM

A Google search will reveal the existence of quite a few influential industrial consortia that further the cause of research and education in fields identified by them. Almost all of them are run jointly by people from industry and from prominent educational and research institutions. You can find a list of them compiled here; I have listed only the ones relevant to the electronics and computer industries. I have found that not many students are aware of these consortia, and that should not be the case. Some of them are highly active and contribute a great deal to research, technology development and education. Consortia like the Accellera Systems Initiative have contributed to a number of IEEE standards, some of which can be downloaded for free from its website. The Semiconductor Research Corporation plays an important role in promoting research and education in the field of semiconductors. The International Technology Roadmap for Semiconductors has played an immense role in identifying the challenges before the semiconductor industry, from design to manufacturing to testing and validation.

Many of these associations also offer scholarships and fellowships for students and research grants for faculty members. Their publications provide a lot of insight into present and future challenges. These publications may not always contain the in-depth research material most graduate students are accustomed to, but they successfully paint the bigger picture. Paying attention to them can help keep research relevant to industry where necessary. Besides, it also helps in learning about real-world problems and the challenges involved in translating research into technology that can be scaled up and widely used. Sometimes a problem is considered solved in academic research, yet the solution never makes it to the market, even when relevant, because its translation into scalable technology remains an open problem.

## What is there in the word “distance”?

In Education, Mathematics on February 6, 2013 at 9:32 PM

I first learned about distances in school, in a class on classical geometry. Of course, it was not called classical geometry in school but simply geometry. The initial concepts were related to distances between points in a plane, between lines in a plane, and between a line and a point in a plane. A plane, as you know, is a 2-dimensional (2D) space. In high school, this concept was extended to 3-dimensional (3D) space. The concept of distance basically gives an idea of how far (or how close) two things (lines, points) are from each other. What I learned was the “2-norm distance” (the typical Euclidean distance):

$\text{2-norm distance} = \sqrt{\displaystyle\sum_{i=1}^{n} (x_{i} - y_{i})^{2}}$

I learned about Hamming distance during my undergraduate courses on electronic communication. However, it was only during research that I learned a lot more about distances. My first surprise came when I heard about distances in a class on image processing: you can use distances to measure the similarity between images! Of course, the definitions and methods used to calculate those distances were also different. Since then I have learned that distances are one major way of identifying similarities between objects or classes of objects. The central idea behind all these different kinds of distances (not just in image processing) remains the same: to measure how far the objects are from each other in some respect. For instance, in psychoanalysis, “emotional distance” is the degree of emotional detachment from some person or events; the Czekanovski-Dice distance is used to compare two audio waveforms x and y in the time domain; and so on. If your distance from the world of distances is not big ;), you might want to try reading the Dictionary of Distances.
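To make the two definitions above concrete, here is a minimal Python sketch of both the 2-norm (Euclidean) distance and the Hamming distance; the function names and example values are mine, chosen for illustration.

```python
import math

def euclidean_distance(x, y):
    """2-norm distance between two equal-length numeric sequences."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(ai != bi for ai, bi in zip(a, b))

print(euclidean_distance([0, 0], [3, 4]))   # → 5.0 (the 3-4-5 triangle)
print(hamming_distance("10110", "10011"))   # → 2 (differ in two positions)
```

Both functions follow the same pattern: compare the sequences element by element and aggregate the differences, which is exactly the “how far apart in some respect” idea.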

## Experiments in Computer Science/Engineering?

In Design Methodologies, Education, Embedded Systems on February 1, 2013 at 1:35 AM

A friend of mine, who is doing a project on implementing various image processing algorithms (like edge detection, adding colors, etc.), was asked by the supervisor concerned to conduct experiments as part of the work. This friend then asked me what the supervisor meant by experiments in this particular case. I was taken aback initially because I had not come across the term experiment being associated with computer science/engineering in a case where the principal job was to implement algorithms already developed by someone else and package the implementation as software. Here there is no hypothesis to be tested, which is an integral part of any experimental science or approach! If the student’s job had been to choose an edge detection algorithm for implementation, by running controlled experiments with different kinds of edge detection techniques on the same kind of workload, then that would have qualified as an experiment.

Nevertheless, I made some suggestions: examine the execution time of the developed software package as the input image size changes; test whether there is any dependency on the image format; and test the performance (visual perception of the quality of the result, execution time) as the amount of information varies across images (for example, an image with a few straight lines/curves in a few orientations vs. an image with hundreds of straight lines/curves in varying orientations). I do not know whether the supervisor meant this or something else, or whether the term was used loosely to refer to software testing.

However, I decided to explore this topic a little more. I found that Stanford University has a graduate-level course titled Designing Computer Science Experiments. An excellent paper on what experimental computer science is, by Peter J. Denning, a former ACM President, was published in 1980 and can be found here. A good repository of resources is maintained by Prof. Dror Feitelson of the Hebrew University, Israel, here. Researchers in the field of computer architecture theorize (make a hypothesis) and run many experiments to test their theory. For instance, people work on different kinds of FPGA architectures to see their benefits and drawbacks. The essential point is that in an experimental approach, one states a hypothesis, conducts experiments, and then analyzes the data generated by the experiments to test the hypothesis.

## Numerical Stability in Calculations

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on January 24, 2013 at 11:44 PM

## The Internet of Things

In Embedded Systems, Engineering Principles, Interdisciplinary Science on November 29, 2012 at 2:43 PM

## A Case For Electrical and Electronic Measurement

In Design Methodologies, Education, Embedded Systems on October 23, 2012 at 12:36 AM

Perhaps one of the least emphasized parts of a university education in electrical, electronics or computer engineering is the field of electrical and electronic measurements. Electrical measurements generally involve measuring current, voltage and resistance. In an embedded system that has sensors, such measurements can play a critical role: the outputs of these sensors are converted to either current or voltage before further processing in software or hardware. Not only to test such a system but also to design it properly, it is important to understand basic measurement concepts like accuracy, repeatability, resolution, instrument error, instrument noise, cable capacitance, probe resistance, instrument calibration, etc. I had my first real experience with some really tough measurements on an OC192 board for a telecommunication application while trying to debug some issues. I must say that while we place a lot of emphasis on software and hardware design issues, it is also important to consider the measurement side of the story in order to test whether the software and the hardware are working properly. Measurement concepts like instrument calibration, sensitivity and timing are very important in a test set-up. Sometimes we miss these things, resulting in a mismatch between requirements and implementation. Keithley’s Getting Back to the Basics of Electrical Measurements is good as an introduction as well as for refreshing one’s basic knowledge.

## Error Documentation: Why not?

In Design Methodologies, Embedded Systems on August 27, 2012 at 12:48 PM

I am sure that many of you who have used any software tool that throws up errors have spent time (at one point or another) figuring out what those errors mean. Every software tool used in an electronics or software design project throws up errors, be it GCC, an EDA tool, or anything else. One might have used the vendor’s support channel, user forums, websites like Stack Overflow, etc. to understand the meaning of those errors. A number of times, these errors make no immediate sense to the user. There are also many errors that can arise for multiple reasons; once one gets a list of these reasons, one has to choose the one most likely to apply to the case at hand. All this reduces productivity: the time spent searching, gathering and analyzing information could have been better spent on design. Would it not be better if tool vendors also released documentation on the different kinds of errors their tools might throw up and the associated reasons? I believe this “ready reference” would be very beneficial. After all, during the development of those tools, the vendors are well aware of why a particular error is thrown. Why not just compile all that information in one place and help the user? Also, errors are not always due to problems in the design source files; sometimes they occur because the tool expects the user to structure the project, the tool inputs, etc. in a certain way. Given the complexity of modern EDA and other development tools and the time spent learning them for effective use, this extra level of documentation from vendors would be most welcome.
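As an illustration of what such a “ready reference” might look like if shipped in machine-readable form, here is a hypothetical Python sketch. The error code `E1024`, its message and its listed causes are all invented for this example, not taken from any real tool.

```python
# Hypothetical error catalogue a tool vendor might ship alongside a tool:
# each code maps to its message and the known reasons it can be raised.
ERROR_CATALOG = {
    "E1024": {
        "message": "unresolved module reference",
        "likely_causes": [
            "module not added to the project file list",
            "library search path not set for this tool run",
        ],
    },
}

def explain(code):
    """Look up an error code and summarize its likely causes."""
    entry = ERROR_CATALOG.get(code)
    if entry is None:
        return f"{code}: no catalogue entry"
    causes = "; ".join(entry["likely_causes"])
    return f"{code}: {entry['message']} (check: {causes})"

print(explain("E1024"))
```

A catalogue like this costs the vendor little, since the causes of each error are already known at development time, yet it replaces a forum search with a single lookup.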

## Can a computer do envy-free division?

In Education, Interdisciplinary Science on July 28, 2012 at 10:15 PM

We have all studied division. In the world of simple mathematics, 8 divided by 2 is always 4. But what about dividing a cake into 2 equal pieces? A computer program can always divide 8 by 2 and give 4 as the answer, but can a computer program divide a cake into 2 equal pieces? Let us make it a bit more complicated. Say the cake has to be divided between persons A and B in such a way that neither of them feels that the other person got more. This means that neither A nor B will envy the share received by the other. So here the notion of equal division has to be understood in the context of the result leading to an envy-free solution. This is the subject of “fair division”, also known as the cake-cutting problem. It is studied in politics, mathematics, economics and the like. Methods and algorithms have been proposed to achieve fair division, but all require inputs from the parties involved at different stages of the procedure. Note that these inputs need not be disclosed, as they could be the feelings/assumptions/conclusions running in the minds of the parties involved. This means that different inputs at different stages can lead to different outcomes. Does it remind you of the “observer effect” in physics? Yes: the inputs (observations of the current state of the division) by a party affect the outcome of the division (the phenomenon being observed). It is impossible (?) for a computer to solve a problem of this type entirely on its own. Such problems arise routinely in the allocation of goods, dispute resolution, negotiation of treaties, etc.
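One classical procedure for two parties, divide-and-choose, makes the point about inputs concrete: the program can only run once each party supplies its private valuation of the cake. Here is a minimal Python sketch; the valuation functions and candidate cut points are invented for illustration, and a real valuation would live in a person’s head, not in code.

```python
def divide_and_choose(value_a, value_b, cuts):
    """Divide-and-choose between players A and B.

    value_a, value_b: each player's private valuation, given as a
    function from a cut point in [0, 1] to the value (to that player)
    of the left piece; the right piece is worth 1 minus that value.
    cuts: candidate cut points A may choose from.
    """
    # A cuts so the two pieces look equal in A's own eyes.
    cut = min(cuts, key=lambda c: abs(value_a(c) - (1 - value_a(c))))
    # B then takes the piece B values more, so B cannot envy A;
    # A values both pieces equally, so A cannot envy B.
    if value_b(cut) >= 1 - value_b(cut):
        return {"A": ("right", cut), "B": ("left", cut)}
    return {"A": ("left", cut), "B": ("right", cut)}

# A values the cake uniformly; B prefers the right half (icing, say).
uniform = lambda c: c
right_heavy = lambda c: c * 0.5  # left piece worth only half its length to B
print(divide_and_choose(uniform, right_heavy, [i / 10 for i in range(1, 10)]))
```

Change either valuation function and the cut point or the assignment of pieces changes with it, which is exactly the observer-effect flavour described above: the parties’ inputs shape the outcome.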

Borrowing terms from economics, a number can be treated as “a homogeneous good”, while a cake is essentially “a heterogeneous good”, as different parts of it can taste different. Hence, its envy-free division is far more complicated. If you are interested, try reading “Fair Division: From Cake-Cutting to Dispute Resolution”, an excellent book by Steven J. Brams (political scientist) and Alan D. Taylor (mathematician).

## Teaching Productive Programming

In Education on June 19, 2012 at 11:29 PM

In the past semester (Jan-May 2012), I supervised a lab on Data Structures and C for first-year undergraduates. It was a good experience; the students were very bright and they all did well in their assignments. Recently, I have been doing quite a lot of programming related to electronic design automation as part of my research, to test some of our proposed algorithms. I came to the realization that while many students are taught programming with a heavy focus on improving their programming skills (smaller code size, faster implementation, etc.), there is a lack of focus on teaching them how to manage large codebases. When these students go out to work in industry, they won’t be writing just one program; they will be writing many as part of a single project. Understanding how to keep code modular by splitting it into multiple source files, a few header files, etc. is very important. It also helps with code reuse, which unfortunately is grossly underemphasized in universities but is a huge practice in industry. Writing code that can be reused requires skill in writing proper comments, naming variables and functions meaningfully, and maintaining proper code documentation. While some of these are dealt with in pieces here and there, it is important to let students see and appreciate the need for this process. Reducing effort and increasing productivity is another important issue that is underestimated in teaching, and these two need not always be equated with superb coding skills. The ability to write makefiles and compile multiple source files with them, keep directories clean, have separate release and build directories, and understand the need for bug-tracking systems (like Bugzilla) and source code/project/file versioning systems (like Tortoise Hg, Tortoise CVS, etc.) is important for successful, clean and productive project development. Many of these abilities are equally applicable to both software and hardware development exercises.

Things like versioning systems and bug tracking also help in understanding how people work in teams. It is all about team play when it comes to conceptualizing, designing, building and shipping a product to market. Take a look at the team size chart for a typical high-end product development team (which is itself a combination of multiple sub-teams) here and convince yourself!