sharadsinha

Archive for the ‘Engineering Principles’ Category

Soilless Farming and “Re”search

In Education, Engineering Principles, Research and Development on June 25, 2013 at 12:12 AM

When I started my PhD, my supervisor, among other things, told me that research is also about revisiting existing concepts and examining them afresh. It is not always about plucking a blue-sky idea out of nowhere. That is why it is called "re"search. Based on my experience over the last few years, I now firmly believe in what he said. Very often we try to come up with an idea that sounds extraordinary, something that inspires awe. There is nothing wrong in that, except that the history of technological evolution shows that ideas and technologies considered groundbreaking, ones that have held us in thrall, have often come from revisiting existing concepts. Of course, there are those which were the result of serendipity, for instance the discovery of penicillin. But that is not the topic of this post.

By examining closely what is considered common knowledge or given fact, people have made breakthroughs. Agriculture has long been associated with soil-based farming; in fact, we seldom talk about agriculture without associating soil quality with it. Agriculture, as we have known it for thousands of years, cannot be practised without soil. However, Dr. Yuichi Mori, a professor in Japan, re-examined the role of soil and realized that it can be replaced by a suitable membrane that provides nutrients to plants and physical support for roots to grow. This is "soil-less agriculture". His company Mebiol markets the technology, called Imec. Not only does the technology not need soil, the hydroponic membrane also stores water and nutrients, so less water is needed for plant growth. The membrane may also block some pathogens that cause plant diseases. Field trials have shown that tomatoes, cucumbers etc. can be grown easily this way, and that grown this way they in fact taste better and are richer in nutrients. You can watch his TEDxTokyo talk here.

Amazing, isn’t it? Now I could try to grow some of these myself if I lived in a land-scarce country or in a high-rise apartment! Interestingly, the earliest documentary evidence of the idea of soil-less agriculture can be found in the 1627 book Sylva Sylvarum by Francis Bacon, with follow-up research by others over the next few centuries. However, Mebiol is the first company to come up with a technology that can be commercialized.

A Tale of Two Samsung Galaxy S4s

In Design Methodologies, Education, Embedded Systems, Engineering Principles on May 14, 2013 at 7:23 PM

When you are in school or college, you are taught about the best ways to do things. It is generally about a point solution. Alternatives are rarely discussed in detail. One almost always looks for the best answer, the best method, the best algorithm. When you begin to work for a company, you quickly realize that the best solution is not always what one is looking for. Time and market pressures play a role in choosing solutions. You may choose a solution that suits the "taste of the target market". When you serve more than one market, it becomes interesting. Would you choose two different solutions for two different markets for the same product? This is one of the reasons analysts cite for what Samsung has done with its Galaxy S4 smartphone. While the US and the Korean versions appear identical on the outside, they use quite a number of different components. Their processors, wireless and image processing architectures are different. Supposedly, the Korean version is faster and has a longer battery life because it uses Samsung’s octa-core Exynos 5 processor, which has an architecture (read here) that achieves a better balance of power efficiency and performance than the Qualcomm Snapdragon processor in the US version. The IHS iSuppli Teardown Service reveals all the component-level differences between the two designs here.

A more plausible reason for the difference between the two architectures is that the LTE bands supported by mobile operators in the US and Korea are different (see here). The two processors (essentially systems on chip in this case) may not support both sets of LTE bands. However, it does illustrate an important point about engineering product design: you can design the same product with different architectures. While not related to the S4, this analysis reminds me of regulations in certain countries which make it compulsory for a manufacturer to source components from local suppliers for products sold in the local market. An example is here. Therefore, as a manufacturer you can end up with different components in different markets for the same product.

I used to think that a consumer electronics item sold in different countries used the same components. That myth now stands broken! While you can easily spot differences in software, the most prominent being the language used in the user interface, it is not easy to spot differences in hardware.

The World as a State Machine

In Design Methodologies, Education, Engineering Principles, Mathematics on April 29, 2013 at 9:46 PM

A state machine is basically a model of computation which helps one analyze the effects of inputs on a system. The system can be in different states throughout its life cycle, though in only one state at a time. It can transition from one state to another depending on some input. Every state machine has a start state and it progresses from there to other states, eventually leading to an end state. Note that it may be possible to reach the end state from any intermediate state as well as from the start state; it depends on the system being modeled. Also, the output of each state may depend on the current state as well as the inputs to that state. Thus state machines model reactive systems, i.e. systems which react. A good description of state machines can be found here. Note that the description there is of finite state machines, so called because they have a finite number of states. State machines are used in many fields of study, not just electrical or computer engineering: they appear in biology, mathematics, linguistics etc. They also have different variants, each trying to capture some additional aspect of a system, which I will not go into. You can read about them at the link mentioned earlier.

I was wondering if the world can be modeled as a state machine. I think that the world in fact is a state machine, except that its end state is unknown. Those with absolute faith in cosmological physics would say that the "Big Bang" can be considered the start state. Those with religious views might consider something else as the start state. The beauty of viewing the world as a state machine lies in the fact that it does not matter whether you believe in science or not. It does not matter whether you have a more religious bent of mind and would like to see the world from a religious or theological perspective, or whether you want to see it only from a scientific standpoint. Either way, the world can be modeled as a state machine; you get to choose the start state depending on which viewpoint you are more comfortable with. In both cases, the world is in fact a reactive system. It can even be considered an aggregation of interacting state machines, where each state machine represents the economic, social, political, religious or scientific state of the world. And nobody would deny that all of these influence each other. Every electrical or computer engineering student studies Moore and Mealy state machines. To them, the world is probably a Mealy state machine, though not strictly so: the outputs in any state that this world resides in depend not only on the current inputs but also on the current state. If we look around us, that sounds about right, does it not? However, this state machine is extremely complex!
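To make the Moore/Mealy distinction concrete, here is a minimal sketch of a Mealy machine in Python (the state names and the edge-detector behaviour are my own choices for illustration). Its output depends on both the current state and the current input, which is exactly the property I am attributing to the world above.

```python
# A minimal Mealy machine: a two-state edge detector that outputs 1
# when the input bit changes (the start state assumes the previous bit was 0).
transitions = {
    # (state, input) -> (next_state, output)
    ("saw_0", 0): ("saw_0", 0),
    ("saw_0", 1): ("saw_1", 1),
    ("saw_1", 0): ("saw_0", 1),
    ("saw_1", 1): ("saw_1", 0),
}

def run(inputs, state="saw_0"):
    outputs = []
    for bit in inputs:
        state, out = transitions[(state, bit)]
        outputs.append(out)
    return outputs

print(run([0, 0, 1, 1, 0, 1]))   # -> [0, 0, 1, 0, 1, 1]
```

In a Moore machine, by contrast, the output would be attached to the state alone, independent of the current input.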

What is optimization?

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on April 15, 2013 at 12:04 AM

Perhaps optimization is the most abused word in research, engineering, or any task that seeks to maximize some intent. The Merriam-Webster dictionary defines it as "an act, process, or methodology of making something (as a design, system, or decision) as fully perfect, functional, or effective as possible; specifically: the mathematical procedures (as finding the maximum of a function) involved in this". We hear about optimizing power, area, or a performance metric like latency. Many people pass off every design decision as an optimization strategy. While such decisions may contribute to local optimization, they may fail to achieve global optimization. In fact, such optimizations may actually degrade performance when the software or the design is used in a context that was not anticipated by the original developers. Read here for some insight into optimizing a memory allocator in C++. You will find another debatable example of optimization to make software run faster here. And here is a nice article on efficiency versus intent. Typically optimization is associated with increasing the efficiency of a design (hardware or software) in some respect, but such optimizations should not destroy the intent of the design. This requires a bit more analysis on the part of the designer/developer to ensure that the intent is not lost. Here is another example.

The field of mathematical optimization, which is concerned with selecting the most appropriate choice (one that satisfies a given set of criteria) from a set of alternatives, is vast and varied. There are numerous techniques suited to different kinds of problems. You can see the list here. Frankly, it is a tough job to recommend one of these techniques for non-trivial problems. One needs to understand the nature of the problem in detail to make a sound recommendation. Understanding the nature of such problems, or modeling them in a way that makes any of these techniques applicable, is a non-trivial task. It requires a lot of theoretical and practical insight.
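To make the local vs. global distinction above concrete, here is a toy sketch in Python. The function, the learning rate and the starting points are all made up for this example; the point is just that a purely local method like plain gradient descent settles into whichever minimum happens to be nearby, so a result labelled "optimized" may only be locally so.

```python
# f(x) = x**4 - 3*x**2 + x has two minima: a local one near x = 1.13
# and the global one near x = -1.30. Plain gradient descent finds
# whichever minimum is closest to its starting point.
def grad(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(gradient_descent(2.0))    # ~  1.13 (stuck in the local minimum)
print(gradient_descent(-2.0))   # ~ -1.30 (the global minimum)
```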

PhD vs Work Experience: The Perennial Debate

In Education, Engineering Principles, Intellectual Property, Research and Development on March 9, 2013 at 11:35 PM

Those of you who have ever considered doing a PhD or getting a higher technical degree have definitely come across the debate on PhD vs work experience. One can find many articles and opinion posts on the subject. Many of us tend to evaluate a PhD and work experience by replacing one with the other: setting aside financial considerations, we examine the worth of each when substituted for the other. I think that this approach is improper. A PhD and work experience can be, or can be made to be, complementary to each other. Not all work experience is of high quality, and the same is true of PhD-granting institutions; not all companies are alike, just as standards differ across institutions of higher learning. I will not debate the pros and cons of a PhD or of work experience in this post, as that subject merits far greater analysis than I can fit in a blog post. However, taking a broader view, I would say that a PhD program lets you get out of your comfort zone and explore complex, unbounded problems which could be fundamental or applied in nature. It teaches you to learn, examine (and re-examine), critique, argue and persuade using facts and figures. It is not that there are no corporate jobs where one can learn these very things, but they are few and far between, and the degree to which you need to exercise your brain varies across them. As an example, you can be a great lawyer, whether corporate, civil or criminal, but being a great lawyer is different from being able to comment on, analyse and contribute to the very subject of jurisprudence which underlies all judicial activity. Another example: you can be an excellent system-on-chip architect, but being able to get into the depths of power integrity analysis is a different story. Of course, you can also be a great power integrity analysis engineer who can apply all sorts of engineering tricks to perform clean power integrity analysis, yet not be able to comment on, analyse or examine the principles on which power integrity analysis is based to the same depth as a typical PhD degree holder would. The point I am trying to make is that "there is space and need for both kinds of experience". They need not be present to the same degree in one single person. The utility of a PhD and that of work experience depends on many factors. At the end of the day, you do a PhD because you want to explore, find new things or just sit back and critically reflect on existing things, because other people are busy meeting the demands of the market, which has its own challenges!

How much and what do you read as a researcher?

In Education, Engineering Principles, Interdisciplinary Science, Research and Development on March 3, 2013 at 5:07 PM

What do you read as a researcher? Most of us read only what is relevant (or what we think is relevant) to our research. But is that all that should be read? I know that many of us do read novels of different kinds, of which fiction is the most common.

However, as far as reading for research is concerned, most of us read within our specific domain, focusing especially on works closely related to our own. We browse through conference proceedings and journals a lot. Some of us venture into reading patents and online newsletters like EE Times. Nevertheless, we tend to stick to a rather narrow range of topics. We measure the utility of reading something against the value that, in our opinion, it might bring to our research. While this is not at all a bad way of doing research, we run the risk of training ourselves to read, think and argue about only a very narrow set of topics, even within our own broader research discipline. This is a byproduct that has negative consequences. It becomes difficult to think beyond what we are most comfortable with, and it makes us experts in a very narrow field. We run the risk of not being able to relate our work to the bigger picture and processes, of not being able to think at the system level or look at the same thing from a different perspective. For instance, a mobile phone is a device that has both software and hardware. A software person will describe it from the software perspective, the hardware person from the hardware perspective. Someone who understands both, even if not in every detail, can help merge the two perspectives, which is very important for product design!

Oscar Wilde said, "It is what you read when you don’t have to that determines what you will be when you can’t help it". It applies not only to life but also to research. Reading about human factors, user interfaces, intellectual property, regulatory practices etc. helps us see the same things from different perspectives. It is a great way to exercise our brains.

At the same time, if you are more adventurous, reading about topics in sociology, psychology, economics, politics etc. helps you develop critical thinking abilities borrowed from different domains. An example is here. And if you can see through all of this, you might even be able to solve a problem in your domain by reading about something exciting in another domain.

Numerical Stability in Calculations

In Design Methodologies, Embedded Systems, Engineering Principles, Mathematics on January 24, 2013 at 11:44 PM

I did not have any course on algorithms in my undergraduate education; I studied them (their properties, design etc.) during my research work. I now realize why their study is important for anyone who wants to be really good at designing algorithms or implementing them. After all, algorithms solve problems. I recently came across the subject of numerical stability of algorithms, numerical algorithms to be precise. While algorithms help solve problems, they need to be implemented on a digital machine (a computer, for example) which has limited precision. Whatever number system we use, it cannot cover all the numbers of exact mathematics. This leads to approximations as well as upper and lower bounds on the numbers that can be represented, and the approximations can be a source of errors and deviations from the exact numerical answer. For instance, on a machine with only 3 significant digits of precision, numbers like 22, 2.22, 0.110, 100 and 177 can be represented. If you add 9 and nine instances of 0.11 on this machine, every intermediate sum (9.11, 9.22, ..., 9.99) fits in 3 digits, so the answer 9.99 matches the exact one. However, if you add 10 and nine instances of 0.11 in that order, i.e. 10 + 0.11 + 0.11 + ..., each intermediate sum such as 10.11 must be rounded back to 3 digits (10.1), so part of every 0.11 is lost and the machine ends up with 10.9 instead of the exact 10.99. Now imagine doing the same calculation in the reverse order, i.e. adding all nine 0.11's first and then 10: the partial sum 0.99 is computed exactly, and 0.99 + 10 = 10.99 rounds to 11.0, which is much closer to the exact answer than 10.9. This means that the way you arrange the numbers to be added (in memory, for instance in an array) can influence the sum! I wish embedded systems engineers read more on this subject so that the numerical errors we see cropping up in such systems get reduced. A nice introduction is at Wikipedia.
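Here is a minimal sketch that mimics such a 3-significant-digit machine in Python. Rounding after every addition is my own simplified model of limited precision (real hardware uses binary floating point), but it reproduces the two summation orders discussed above.

```python
import math

def round_sig(x, sig=3):
    """Round x to `sig` significant decimal digits: a toy model of the machine."""
    if x == 0:
        return 0.0
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

def machine_sum(values, sig=3):
    """Accumulate left to right, rounding after every addition."""
    acc = 0.0
    for v in values:
        acc = round_sig(acc + v, sig)
    return acc

print(machine_sum([10.0] + [0.11] * 9))   # 10.9 -> part of every 0.11 is lost
print(machine_sum([0.11] * 9 + [10.0]))   # 11.0 -> much closer to the exact 10.99
```

The same effect appears with real binary floating point whenever the addends differ widely in magnitude, which is why techniques such as summing the smallest values first or Kahan (compensated) summation exist.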

Velocity, Displacement & Acceleration: Science vs. Engineering

In Design Methodologies, Education, Engineering Principles on December 24, 2012 at 7:04 PM

One often encounters the question: what is the difference between science and engineering? An oft-quoted answer is that engineering is, roughly speaking, the application of science, of scientific results borne out of investigation into the nature of matter and its interaction with its surroundings. Science is about acquiring more knowledge and understanding of existing phenomena, whereas engineering is about solving problems by applying that knowledge. Therefore, many also hold the view that engineering is applied science. Well, I won't get into the engineering vs. science debate or put before you an essay on the topic in this post. I would just like to highlight an example of where engineering takes over from science. Every student studies the concepts of velocity, acceleration and displacement in elementary physics classes. The concepts are very simple: velocity is the derivative of displacement with respect to time, while acceleration is the derivative of velocity with respect to time. Therefore, to get displacement from velocity, one needs to integrate the velocity with respect to time over a given time period. Similarly, velocity at a certain point in time is the result of integrating acceleration over a given time interval. Now, if one is asked to apply these principles to calculate velocity and displacement using the acceleration data obtained from a transducer mounted on an engine, how would one do it? In this case the engine vibrates, and there is no noticeable physical movement of the engine body from one place to another in the traditional sense (like a ball travelling from place A to place B in a field). This is where engineering comes in. An engine is a complex system and its vibrations need not be linear or constant in time. There can be vibrations at low frequencies as well as high frequencies, and there can be periods of no vibration at all. In these cases, calculating displacement or velocity is not straightforward and requires greater insight into the mechanism of vibration as well as the nature of the acceleration signal. I would recommend reading 1, 2 and 3 to get an idea of how interesting and insightful it can become! These are links to articles by Prosig, which works in the area of noise and vibration analysis. Understanding these mechanisms is important for any embedded designer who writes code to measure such parameters using microcontrollers etc.
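For the curious, here is a minimal sketch of the naive starting point: numerically integrating a sampled acceleration signal with the trapezoidal rule in Python. The 50 Hz sine, the sampling rate and the simple mean-removal step are assumptions for illustration only; a real vibration signal typically needs proper high-pass filtering and other corrections, which is exactly the kind of insight the Prosig articles provide.

```python
import numpy as np

fs = 1000.0                                  # assumed sampling rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 50 * t)           # made-up stand-in for transducer data
dt = 1.0 / fs

accel = accel - accel.mean()                 # crude DC-offset removal before integrating

# cumulative trapezoidal integration: acceleration -> velocity -> displacement
vel = np.concatenate(([0.0], np.cumsum((accel[:-1] + accel[1:]) * dt / 2)))
vel = vel - vel.mean()                       # remove drift again before integrating once more
disp = np.concatenate(([0.0], np.cumsum((vel[:-1] + vel[1:]) * dt / 2)))
```

Even this tiny example hides the hard part: any constant offset or low-frequency noise in the measured acceleration grows roughly linearly after one integration and quadratically after two, so blindly applying the textbook formulas to real transducer data produces meaningless displacement values.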

The Internet of Things

In Embedded Systems, Engineering Principles, Interdisciplinary Science on November 29, 2012 at 2:43 PM

When I first attended a presentation on "The Internet of Things", I was not very excited. It turned out to be nothing more than a glorified description of sensor networks. Though the phrase was first used in 1999, as reported in an article in the RFID Journal, it has been interpreted in many different ways by different people, and finding a way through that maze of descriptions is really difficult. However, after reading a lot about it, and based on my own understanding of embedded systems, sensor networks and systems engineering, I would like to share what it means for a non-technical audience. I find it best to explain through examples. Take the case of a smart home: you can control the appliances in your home while driving your car, because a communication network links you to them while you are on the road. Your smartphone connects you to the internet, where you can shop, play games with your friends and download apps that make your phone more versatile. It syncs with your email accounts and any sync-enabled application, helps you make payments on the go (mobile banking), and provides access to your data anywhere through cloud-based tools like Dropbox. The GPS on your phone helps you find your way in a city by showing your position on a city map downloaded to your phone over a Wi-Fi or similar data connection. You can drive almost safely even in a city that is new to you! These examples demonstrate an interaction between humans, electronic devices (which may have sensors), mechanical devices and the traditional internet. By traditional internet I mean the internet which was initially seen as just a repository of information and which has now grown to include processing engines, like those that facilitate voice-enabled search and SMS on your smartphone, and storage and compute capacity for cloud applications (like Amazon's EC2 service). Thus the "Internet of Things" is nothing but a network where human actions, electrical and mechanical devices and the internet come together to interact in a meaningful way. The scope of this interaction can be as wide and varied as needed, depending on the intended result.

A Case of Two Means: Geometric & Arithmetic

In Engineering Principles, Mathematics on November 7, 2012 at 11:55 PM

Why this post? I have come across several examples of quoting results (numbers) in papers, reports etc. where the authors have used the arithmetic mean. For instance, people will run an application on different computing platforms and then calculate the time taken on each platform. They present their results in a table whose last column carries an entry titled "mean". Often, it is the arithmetic mean (AM) that is quoted. How many times have you seen the geometric mean (GM) being quoted? Not many. The primary reason is that we are too comfortable with the arithmetic mean; it is what generally pops into our heads when we think of a mean. But in the process we forget to ask whether the AM is the right choice. It is important to understand when to use the AM and when to use the GM. The AM is biased towards large data points in a data set, while that is not the case with the GM. The GM is generally used when several quantities multiply together to produce a result, while the AM is generally used when they add up to produce a result. Sometimes it is obvious whether they add or multiply; sometimes it is not. So you have to put extra effort into finding out which mean to use and what message you are trying to drive home through that mean value. In the example cited at the beginning, the GM should be used. Some nice references to read are: ref1, ref2, ref3, ref4. Similarly, understanding when to use the harmonic mean (HM) is also important. Whichever mean you choose, you have to understand your data points as well as be clear about the message you are trying to convey. Means and averages are very important in economics, mathematical finance etc.
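As a small, hypothetical illustration (the numbers are made up), suppose an application runs 0.5x, 2x and 8x as fast as some baseline machine on three platforms:

```python
import math

# hypothetical speedups of one application on three platforms,
# each normalized to the same baseline machine
speedups = [0.5, 2.0, 8.0]

am = sum(speedups) / len(speedups)               # 3.5 -> pulled up by the single 8x result
gm = math.prod(speedups) ** (1 / len(speedups))  # 2.0 -> (0.5 * 2 * 8) ** (1/3)

print(am, gm)
```

Comparisons based on the GM of such normalized ratios do not depend on which machine is chosen as the baseline, whereas AM-based comparisons can flip when the baseline changes, which is one reason the GM is preferred for this kind of data.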