Until a few years ago, nobody could have imagined that the change from analogue to digital technology would have such a large impact on the entire world, transforming every sphere of human life, business, and industry. This shift paved the way for the fourth industrial revolution and the information age, and has created a world of its own, termed the cyberworld: a virtual world that exists alongside the real one. All that is required to enter it is an internet connection.
History of the Analogue-to-Digital Signal
The origin lies somewhere in the 1960s and 1970s, with the application of the digital computer to Digital Signal Processing (DSP). Since computers were costly in those days, DSP was confined to only four key areas: radar and sonar, oil exploration, space exploration, and medical imaging. The proliferation of personal computers in the 1980s and 1990s resulted in innovative new applications and a shift from the military to the commercial arena. With this technical revolution, DSP became part of the undergraduate electrical engineering curriculum in the early 1980s. Even so, it remained a nightmare for many scientists and engineers, who were required to understand complicated equations, symbols, and unfamiliar terminology.
Difference between Analogue and Digital Signals
Both are common media of communication and information, and both are waves with different properties. The major difference is that analogue signals are continuous electrical signals, while digital signals are discrete (non-continuous) electrical signals.
Analogue signals are continuous signals, denoted by sine waves, that represent physical measurements directly. Analogue signal processing can be done in real time and consumes less bandwidth, but analogue hardware is not flexible and analogue instruments require considerable power. Analogue instruments generally have a scale that is cramped at the lower end, which introduces considerable observational error.
Digital signals are discrete-time signals, typically square waves. They are generated by digital modulation, represent separate (discrete) values, and their hardware is flexible in operation. They are well suited to audio and video transmission, computing, and digital electronics. Digital signal processing can also be done in real time, though it consumes more bandwidth. Information is stored as binary bits, and the hardware requires only nominal power.
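The distinction above can be sketched in code: a continuous (analogue) waveform is sampled at discrete instants, and each sample is quantised to one of a fixed number of levels that can be written as binary bits. The sample rate and 4-bit depth below are arbitrary illustrative choices, not values from the text.

```python
import math

def quantize(value, bits=4):
    """Map an analogue value in [-1, 1] to one of 2**bits discrete levels."""
    levels = 2 ** bits
    # Clamp out-of-range input, scale to [0, levels - 1], round to nearest level.
    clamped = max(-1.0, min(1.0, value))
    return round((clamped + 1) / 2 * (levels - 1))

# "Analogue" source: one cycle of a sine wave, sampled 8 times.
sample_rate = 8
samples = [math.sin(2 * math.pi * t / sample_rate) for t in range(sample_rate)]

digital = [quantize(s) for s in samples]          # discrete levels 0..15
binary = [format(d, "04b") for d in digital]      # the same levels as 4-bit codes
print(digital)
print(binary)
```

Increasing the sample rate and bit depth makes the digital version approximate the analogue waveform more closely, at the cost of more bandwidth, which is the trade-off described above.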
Digital Revolution and the Invention of the Computer
The 1940s saw the first digital electronic calculating machines, developed during World War II, followed closely by the first semiconductor transistor. These were followed by the silicon-based MOSFET (MOS transistor) and the monolithic integrated circuit (IC) chip in the late 1950s, and by the microprocessor and the microcomputer revolution in the 1970s.
A computer is an appliance of sorts, but one that can be programmed to carry out sequences of arithmetic or logical operations without manual intervention. Modern computers can execute generic sets of operations known as programs, and on the basis of these programs can carry out a wide range of tasks. A computer system includes the hardware, the operating system (the main software), and the peripheral equipment required for useful operation. Groups of computers can also be linked to function together, forming a computer network.
Initially, computers were used for calculation only. In the three earlier industrial revolutions, various mechanical devices were used to automate hard tasks. Since then, the speed, power, and usefulness of computers have increased beyond imagination. They now range from special-purpose devices such as microwave ovens, remote controls, security alarm systems, industrial robots, and computer-aided design systems, to general-purpose devices such as personal computers and smartphones, linked by the internet to hundreds of millions of other computers.
Conventionally, a modern computer consists of at least one processing element, a central processing unit (CPU) in the form of a microprocessor, together with computer memory (semiconductor memory chips). The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touch screen). Peripheral devices enable information to be retrieved from an external source and the results of operations to be saved and retrieved.
The prediction of Moore’s law became reality in what is now known as the Digital Revolution of the late 20th and early 21st centuries.
The law was framed in 1965 by Gordon Moore, co-founder of Intel, who predicted that the number of components on a computer chip would double every two years. More precisely, Moore predicted that the number of transistors that could be placed on a single square inch of an integrated circuit chip would double every two years, while the cost of computing would be halved. The other tenet of Moore’s law is that the growth of microprocessors is exponential. The law continues to have a significant impact on the electronics sector as the fundamental principle guiding the course of modern computing and the semiconductor industry.
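The doubling rule above can be illustrated with a small projection. The base values used here (Intel's 4004 of 1971, with roughly 2,300 transistors) are a commonly cited starting point chosen for illustration, not figures from the text.

```python
def transistors(year, base_year=1971, base_count=2300):
    """Moore's-law projection: the transistor count doubles every two years.

    Base values assume the Intel 4004 (1971, ~2,300 transistors) purely
    as an illustrative starting point.
    """
    doublings = (year - base_year) // 2  # whole two-year periods elapsed
    return base_count * 2 ** doublings

# Projected counts at ten-year intervals.
for year in (1971, 1981, 1991, 2001):
    print(year, transistors(year))
```

Five doublings per decade multiply the count by 32, which is why the projected figures climb from thousands to tens of millions over three decades.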
Industry Implications of Moore’s Law
Moore’s law was first published in Electronics magazine in 1965, when Moore was a founder and director of research at Fairchild Semiconductor. While he did not attach much weight to his own prediction at the time, it continues to stand as a technological benchmark for the semiconductor industry, and its importance to semiconductor manufacturers is evident. The industry has created a predictive roadmap, a set of documents titled “The International Technology Roadmap for Semiconductors,” spanning nearly five decades from 1971 through 2020. The roadmap was established by five geographic regions that together represent nearly all chip manufacturers, and decisions about future product releases and research efforts are consequently based on the two-year window of Moore’s law.
Economic Implications of Moore’s Law
The economic impact follows from the fact that computing devices continually grow exponentially in capability while their cost falls, both for the manufacturer and for the end-user, the consumer. Interdisciplinary research bodies keep delivering improvements and innovations, for instance in chemical mechanical planarization, an abrasive polishing technique used in manufacturing integrated circuits that optimises the cost and efficiency of the chip. The lowered cost of manufacturing and the increased reliability of new technology nodes have resulted in significant improvement in the equity and operations of the semiconductor industry within the electronics sector. The sweeping implications of Moore’s law can be seen in the progress of cloud computing and social media technologies, whose growing need for computing capability bears directly on the demand for more components on a single chip. The economic bonds among equipment manufacturers, chip manufacturers, and the consumer market continue to rest on the industry’s ability to keep pace with the dictums of Moore’s law.
The far-reaching status of this law is underlined by the fact that it has triggered a technological migration from microelectronics to nanoelectronics and created an industry segment, nanotechnology, that is itself growing exponentially. This paradigm shift is generating profound interest in new areas, including nano-materials and new optimisation technologies for semiconductor manufacturers. The law remains apt for the industry until some unparalleled technological breakthrough displaces it; meanwhile it shapes our future through electronic products that become obsolete soon after release, replaced by versions with more advanced features.
History of Electronics and Its Development
The field of electronics began with the invention of the vacuum diode by J.A. Fleming in 1904. Subsequently, Lee De Forest invented the vacuum triode, which could amplify electrical signals. This paved the way for the tetrode and pentode tubes, which were used widely throughout the world until World War II.
After World War II, in 1947–48, the transistor was invented, an achievement later recognised with the Nobel Prize. The transistor replaced the vacuum tube, which was bulky and consumed considerable power. The use of germanium and silicon semiconductor materials for making transistors was widely adopted in electronic circuits. The subsequent invention of the integrated circuit (IC) radically changed the nature of electronic circuits, as an entire circuit could be integrated on a single chip, resulting in electronic devices of lower cost, size, and weight. From 1958 to 1975, ICs advanced to several thousand components on a single chip, progressing through small-scale, medium-scale, large-scale, and very-large-scale integration.
Intel’s pioneering of the microprocessor at the start of the 1970s brought radical change to all these components. Analogue integrated circuits soon followed, popularising the operational amplifier for analogue signal processing. These analogue circuits include analogue multipliers, ADC and DAC converters, and analogue filters.