A history of software development as an adjunct to hardware development

While the first computers of the mid-20th century all involved elements of software development - computer code and algorithms that solved complex problems - the development of software was initially more of an adjunct to the development of computer hardware.

Early software programs were rigid, akin to scripting languages like Bash.[1] Programs were written in binary for mainframe computers, to be stored in memory, then started and stopped.

Coding was rudimentary. The first programming languages, Fortran and Lisp, emerged in the 1950s, shortly followed by Pascal, C and Ada, which emerged from the need to overcome the “software crisis” [2]. In the 1970s, word processing software emerged as a precursor to modern application software [2], with Microsoft’s Word later becoming a dominant player.

In the 1980s, IBM emerged as an important player, introducing personal computers and increasing the demand for software like Word and for graphical user interfaces (GUIs). Object-oriented programming became popular, organising code around objects rather than the procedures of procedural or functional programming, to make software packages more efficient and extensible.

Right up to the 1990s, software development projects were costly, routinely exceeding their schedules and budgets, and they could also endanger physical safety, with overheating machines putting property and human life at risk. Software was not designed to manage memory and other resources, which caused processors to overheat, and poor software design allowed hackers to steal information.

Early software was often designed by women, while hardware design was seen as the domain of men[3]. The rise of the internet in the 1990s spurred the development of a new generation of software packages.

Margaret Hamilton[4], the American abstract mathematician and software developer who worked on the Apollo project at the Massachusetts Institute of Technology (MIT), is credited with coining the term software engineering[5]. During the project, she developed systems designs and process models to maximise software reliability and reuse, and she is credited with establishing end-to-end testing techniques and software lifecycle management techniques.

Language creation, cloud computing and the move into AI and machine learning

It was in the 1990s that many of the core modern programming languages were created: Python in 1991, and Java and JavaScript in 1995. Computer science became widely offered as a university degree, and Hamilton’s early work on software lifecycle management and testing methods became established practice.

In the early 2000s, Agile methodologies, including extreme programming and Lean practices, began to replace the older waterfall model of software development. In tandem, cloud computing allowed developers to access virtualised resources, resulting in faster deployment of software[6].

By the 2010s, with the disruptive rise of mobile phones and the miniaturisation of chips, software was being developed not only for computers but also for mobile applications[7]. Mobile app stores made it easier for developers to commercialise their software.

With the spread of near-field communication and other embedded systems, the programs written for these devices use languages closer to early programming languages, such as C.

All this led to a proliferation of “big data” and the need to manage it with machine learning and, in today’s world, AI: systems that build models allowing computers to learn to match patterns and develop algorithms based on the underlying data.[8]

Software languages versus natural languages

Given the initial need for computers to perform mathematical tasks, computer languages developed a language structure of their own - a small vocabulary, limited room for ambiguity in instructions and the ability to flow logically through a binary (yes-no) lexical tree.

Computer languages are defined by context-free grammars, commonly written in a notation called Backus-Naur Form (BNF)[9].

BNF uses symbols and production rules as a form of pattern matching rather than words and syllables. Source code is parsed against these rules to validate its instructions. Parsing often uses recursion - restating the problem in smaller terms over and over until the solution is reached. Each restatement narrows down the options until the base case is resolved.

This form of parsing uses a recursive algorithm, which must eventually stop calling itself; otherwise it will stop only when it has used up all of the available memory. At that point the calls run out of room on the stack, leading to a “stack overflow”.
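To make this concrete, the following is a minimal recursive-descent parser sketch in Python for a tiny, hypothetical expression grammar (the grammar, function names and token handling here are illustrative assumptions, not a standard):

    # BNF-style rules for a toy grammar:
    #   <expr>   ::= <number> | <number> "+" <expr>
    #   <number> ::= one or more digits
    # Each rule becomes a function; parse_expr calls itself on a smaller
    # remainder of the input until it reaches the base case, a number.

    def parse_number(tokens, pos):
        token = tokens[pos]
        if not token.isdigit():
            raise SyntaxError(f"expected a number, got {token!r}")
        return int(token), pos + 1

    def parse_expr(tokens, pos=0):
        value, pos = parse_number(tokens, pos)        # base case: a number
        if pos < len(tokens) and tokens[pos] == "+":  # recursive case: "+" <expr>
            rhs, pos = parse_expr(tokens, pos + 1)    # recursion narrows the input
            value += rhs
        return value, pos

    result, _ = parse_expr(["1", "+", "2", "+", "3"])
    print(result)  # prints 6

Because each recursive call consumes at least one token, the recursion is guaranteed to reach its base case; a rule that never shrank the input would recurse until the call stack was exhausted (in Python this surfaces as a RecursionError rather than a raw stack overflow).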

From simple assembly to complex packaging, bundling and compiling

Early languages, like Pascal, merely assembled code to be parsed, read and then executed in memory. Today, all of these steps are abstracted away from the end user.

Higher-level languages pass through several stages:

  • Code creation - before code gets executed, a high-level language can be written in an integrated development environment (IDE) or a text editor. IDEs come with runtime overheads but save time, as they can perform many pre-compiling tasks and debug code before it has run

  • Pre-compiling - lexical analysis scans streams of characters, discards white space and comments, and lints and tokenizes characters into the single units defined by the source language (a minimal tokenizer sketch follows this list)

  • Code compiling (including debugging and code optimising) - syntax is analysed again, looking for clauses and sentences according to the grammar rules set by the language, and the syntax tree is accepted, or created if it was not built at the pre-compile stage

  • Chunking and bundling - code is chunked and bundled together to be processed in streams. In this stage symbol tables are used to resolve names, relative addresses of variables are generated, and the code is prepared for assembly

  • Assembly - code is boiled down to binary, where linking and loading begin in order to generate executable code

  • Error handling - errors encountered while the code is being compiled are handled and reported back as compile-time errors, while errors that surface only once the program is running are reported as runtime errors
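As an illustration of the lexical-analysis step - scanning a character stream, discarding white space and comments, and grouping characters into tokens - the following is a minimal sketch in Python (the token categories and the tiny source language are assumptions made for illustration):

    import re

    # Token patterns for a tiny, hypothetical source language.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),           # a run of digits forms a single unit
        ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers and keywords
        ("OP",     r"[+\-*/=]"),      # single-character operators
        ("SKIP",   r"\s+|#[^\n]*"),   # white space and comments are discarded
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

    def tokenize(source):
        """Scan the character stream and yield (kind, text) tokens."""
        pos = 0
        while pos < len(source):
            match = MASTER.match(source, pos)
            if match is None:
                raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
            pos = match.end()
            if match.lastgroup != "SKIP":  # drop white space and comments
                yield match.lastgroup, match.group()

    print(list(tokenize("total = 12 + 30  # a comment")))
    # [('IDENT', 'total'), ('OP', '='), ('NUMBER', '12'), ('OP', '+'), ('NUMBER', '30')]

The token stream produced here is what the later syntax-analysis stage would parse against the language’s grammar rules.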

The software development cycle

Using core concepts from the Hamilton model, the software development cycle is organised into the following stages:

  1. Requirements mapping
  2. High-level design and architecture (may overlap with detailed design)
  3. Detailed design and implementation (some overlap with the design phase in the detail of implementation and any adjustments to be made)
  4. Testing
  5. Release and maintenance

These may be executed one after the other - in what is known as the waterfall model - or iteratively, as in the Agile methodology. The V model allows linear progression up and down the scale, as well as across its vertical nodes where required[10].

The aim of the software development cycle, which is constantly evolving, is to release software as quickly, efficiently and accurately as possible, given the growing demand for software that solves problems in an abstract form and relieves the tedium of monotonous jobs and routines.

What constitutes a program

A software package, or program, today is a compiled or pre-compiled package of code blocks known as the source code. Source code is compiled into object files, and object files are linked with libraries to make the program run. The packaged file is executable and can be run on a computer, a mobile device, a gaming platform or an embedded system.
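Python offers a small-scale view of the same idea: source text is compiled into a code object, which the interpreter can then execute (a minimal sketch; real packaged programs also involve linking object files against libraries, which this does not show):

    # Source code as plain text.
    source = "print('hello from a compiled code object')"

    # Compile the source into a code object, loosely analogous to an object file.
    code_obj = compile(source, "<example>", "exec")

    # Run the compiled object.
    exec(code_obj)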

Source code can be written in one or several languages, together called the tech stack. Code is constantly maintained and updated, with packages labelled by a numbering system of major versions, minor updates and patches. For example, in Word 14.90.2, 14 is the major version, 90 the minor update and 2 the patch.
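A small illustration of that major.minor.patch convention (the parsing helper below is hypothetical; the version string is the one from the example above):

    def parse_version(version):
        """Split a 'major.minor.patch' version string into its parts."""
        major, minor, patch = (int(part) for part in version.split("."))
        return {"major": major, "minor": minor, "patch": patch}

    print(parse_version("14.90.2"))
    # {'major': 14, 'minor': 90, 'patch': 2}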

Software, if it is not open source - where the licence grants free access to use it - will set out its own terms of use.

Linux is often used as the base for studying operating systems because operating systems built on Linux are open source. Unix, Windows, Android and Apple systems all have their own commercial software, which makes understanding their operating systems less transparent.

EXTERNAL REFERENCES - Software development as an adjunct to hardware development