Well, it is tough to define an algorithm (read: algo) in exact words; to me, it is simply an instruction set arranged as a well-defined, structured sequence of programmable logical steps and/or loops that performs computation to solve an identified problem. Over the last 200 years the definition has grown more complicated and detailed as researchers have tried to pin the term down, yet it keeps evolving as algorithms are used and deployed well beyond their classical scope. They often serve as specifications for complex or iterative calculations, for large-scale or real-time data processing, or for automated reasoning and machine learning tasks within an application workflow or its business logic.
In modern computing, algorithms are used widely, ranging from forensics, forecasting and financial modelling to futuristic predictions built on emerging technologies such as Machine Learning and Data Science. In a true sense they extend the business logic through sets of iterative computations and enable advanced analytics with speed and accuracy. An algorithm has the following properties: it takes zero or more inputs, produces at least one output, consists of precisely defined (unambiguous) steps, terminates after a finite number of steps, and each step is basic enough to be carried out effectively.
Algorithmic problems are often presented in plain English, in terms of real-world objects and references. Algorithm designers may then restate these problems in terms of formal, abstract, mathematical objects such as numbers, arrays, lists, graphs and trees, so that they can reason about them formally.
The complex and proprietary nature of many algorithms, the current lack of standards and tools, and the speeds at which algorithms operate leave limited controls and governance over the pseudocode operating within them; if any of these go out of hand, the result can be adverse reputational, financial, operational, regulatory, technology and competitive risks. Of course, there are checks and balances in place via structured analysis of algorithms and evaluation metrics such as classification accuracy, F1 score, log loss and the confusion matrix for measuring false positives or negatives, in line with established data science and analytics practice, so this is not a life-and-death situation for sure.
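To make those metrics concrete, here is a minimal sketch using scikit-learn's metrics module on made-up labels and probabilities (purely illustrative, not tied to any particular model):

```python
# Minimal sketch: computing the evaluation metrics mentioned above
# with scikit-learn, on hypothetical labels (illustrative only).
from sklearn.metrics import (accuracy_score, f1_score, log_loss,
                             confusion_matrix)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]           # hypothetical ground truth
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]           # hypothetical model output
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.7, 0.6, 0.95]  # predicted P(class=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))
print("log loss:", log_loss(y_true, y_prob))
# Confusion matrix rows = actual class, columns = predicted class;
# the off-diagonal cells are the false positives/negatives noted above.
print(confusion_matrix(y_true, y_pred))
```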
The algorithm life cycle includes the design/problem-definition, writing, testing and analysing phases. The skills required to effectively design and analyse algorithms are entangled with the skills required to effectively describe them, and the comprehensiveness of any algorithm rests on four key tenets.
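To make those phases concrete, here is a toy walk-through on a hypothetical problem (finding a value in a sorted list); the problem, function name and tests are my own illustrative choices:

```python
# Toy walk-through of the life cycle on a hypothetical problem:
# design  -> "given a sorted list, find the index of a target value"
# write   -> classic binary search
# test    -> a few assertions
# analyse -> the search range halves each step, so O(log n) time

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# test phase: cover found, not-found and boundary cases
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
assert binary_search([], 1) == -1
```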
Data structures and algorithms depend heavily on each other, and gaining conceptual depth in them requires consistency and regular practice. A data structure is a way to store and organize data in order to facilitate access and modification; even a single integer or floating-point number in a named storage location can be viewed as a simple data structure. Being able to write algorithms aligned to the appropriate data structure or data set is essential knowledge, without which the right sequence of steps to solve a particular problem can never be achieved, so the two must be learned and understood together.
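A small sketch of why the choice of structure matters: the same membership question asked of a Python list (linear scan) and a set (hash lookup). The sizes and timings here are illustrative and will vary by machine:

```python
# Same question, two data structures: a list scans linearly,
# a set answers in (amortised) constant time.
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

print("list:", timeit.timeit(lambda: 99_999 in data_list, number=1_000))
print("set :", timeit.timeit(lambda: 99_999 in data_set, number=1_000))
# Picking the right structure can change the algorithm's complexity class.
```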
During my research on the topic, I came across an interesting book on algorithms by Jeff Erickson, where he mentions the origin of algorithms by way of the "Hindu-Arabic" numeral system. The oldest surviving descriptions of the algorithm appear in The Mathematical Classic of Sunzi, written in China between the 3rd and 4th centuries, and in Eutocius of Ascalon's commentaries on Archimedes' Measurement of the Circle, but there is evidence that the algorithm was known much earlier: it was described in Euclid's Elements, centuries before that, for multiplying or dividing two magnitudes.
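The multiplication method those ancient sources describe is usually rendered today as duplation-and-mediation, or "peasant multiplication". A minimal sketch, assuming non-negative integer inputs:

```python
# Duplation-and-mediation ("peasant") multiplication: halve one factor,
# double the other, and sum the doubled values paired with odd halves.
def peasant_multiply(x, y):
    product = 0
    while x > 0:
        if x % 2 == 1:      # x is odd: this doubling contributes
            product += y
        x //= 2             # mediation: halve, dropping the remainder
        y *= 2              # duplation: double
    return product

assert peasant_multiply(13, 11) == 143
```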
Interestingly, the roots can be traced to ancient India as well. Brahmagupta's 7th-century treatise Brāhmasphuṭasiddhānta (the Doctrine of Brahma, written c. 628) is his main work, and a faster divide-and-conquer method was originally proposed by the Indian prosodist Pingala in the 2nd century; it used recursive formulas and is among the earliest examples of recursion, more than 2000 years ago, in the study of poetic meter, or prosody. Classical Sanskrit poetry distinguishes between two types of syllables (akṣara): light (laghu) and heavy (guru). In one class of meters, variously called mātrā-vṛtta or mātrā-chandas, each line of poetry consists of a fixed number of "beats" (mātrā), where each light syllable lasts one beat and each heavy syllable lasts two beats. The formal study of mātrā-vṛtta dates back to the Chandaḥśāstra, written by the Acharya Pingala between 600 BCE and 200 BCE, which contains the basic ideas behind the Fibonacci numbers.
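Pingala's observation, restated: a line of n beats is either a line of n-1 beats followed by a light syllable (1 beat) or a line of n-2 beats followed by a heavy syllable (2 beats), so the counts obey the Fibonacci recurrence. A small memoised sketch:

```python
# Count the light/heavy syllable patterns spanning exactly n beats;
# the recurrence is exactly the Fibonacci recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def metre_count(n):
    """Number of maatraa-vrtta patterns of exactly n beats."""
    if n in (0, 1):
        return 1
    return metre_count(n - 1) + metre_count(n - 2)

print([metre_count(n) for n in range(1, 9)])
# -> [1, 2, 3, 5, 8, 13, 21, 34]
```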
Today, algorithms power the heart of computation: we can see them at work solving our daily-life problems, from social media networks, GPS applications, search engines and e-commerce platforms to recommendation systems and video surveillance, all powered by various algorithms coupled with modern data structures. Recently there have been mentions of algorithmic entities, meaning autonomous algorithms that operate without human control or interference, and attention is being given to the idea of such entities being granted (partial or full) legal personhood along with the accompanying rights and obligations. The algorithmic world has certainly moved on…
As discussed above, various kinds of algorithms exist to solve different problems via different approaches; in programming, a few are considered the important algorithms for solving a particular class of problem.
Yes, there are different schools of thought on the biases inherited by some algorithms, which risk the accuracy and integrity of outcomes and can lead to inappropriate or wrong conclusions and insights. Complexity, lack of governance or standards, errors in algorithm design, and inappropriate usage or testing all expose algorithms to such risks and biases, and may skew the results and their accuracy in part or in full.
Math-washing is a term coined for the obsession with math and algorithms and the human psychological tendency to believe something more readily when math or jargon is attached to it, even though the underlying values may be arbitrary or assumed. Since machine learning algorithms are trained on given datasets to recognize and leverage patterns, associations and correlations in the statistics, they may inherit biases from the data analyst or scientist creating or 'curating' those datasets.
For example, word embedding is a technique for identifying associations between words via vectors; depending on the angle between vectors, the machine can infer the meaning of a word along with its commonly associated words and correlations. These vectors make up the dictionary of words for the algorithms. Word embedding is widely used in many common applications, including translation services, search, and text autocomplete suggestions. Thus any wrong association, phrasing or bias that arises naturally from the culture, language and regional beliefs of the human engineers involved in training these embedding-based models may perpetuate bias, since machine learning is prone to getting stuck in feedback loops that reinforce its own learning.
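A minimal sketch of "association via vector angle", using toy hand-made 3-dimensional vectors; real embeddings have hundreds of dimensions and are learned from text, not assigned by hand:

```python
# Cosine similarity on toy word vectors: a value near 1.0 means the
# vectors point the same way, i.e. the words are strongly associated.
import numpy as np

toy_vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(toy_vectors["king"], toy_vectors["queen"]))  # high
print(cosine_similarity(toy_vectors["king"], toy_vectors["apple"]))  # low
```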
Unfortunately, we humans are not as smart or neutral about these characteristics of language, phrasing and belief as the algorithms would need us to be for them to work the way they should. The content or datasets used for real-time learning may include outrage on social media, casual conversations, and even fake news, celebrity gossip, political slander and many other things that serve no purpose for the expected outcome; because the algorithms cannot understand that, echo chambers form, and the result persists as a structured bias that goes unnoticed by humans.
These risks must be dealt with through enhanced quality checks and modernised risk management processes, with an eye on governance over assumptions, approach and design, development and deployment. This is not as simple as it sounds, since today's algorithm-based decision-making systems are becoming more prevalent and integral to the digital landscape while remaining complex, unpredictable and proprietary. Looking from the development angle, most of these algorithms are based on advanced technologies such as machine learning and evolve over time depending on the input data and on the veracity and volume of the data sets used for inferencing, so predicting or explaining algorithm behaviour is difficult and at times not possible at all. My quick-read recommendation to make this point would be "How algorithms are controlling your life", a dialogue with writer Hannah Fry, a mathematician at University College London.
A standard, traditional, point-in-time risk management process will not be effective here; continuous monitoring and corrective action are needed to stay on course. The best way to maintain accountability is to keep accurate and detailed records: the data by which decisions come to be made needs to be transparent and easily auditable, so that if something goes wrong, audit and quality checks can measure the skew or disorientation of the intended results, and the steps leading up to the outcome can be retraced to locate the source of the problem. This is, of course, an iterative endeavour.
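A hypothetical sketch of that record-keeping idea: wrap a decision function so every input, output and timestamp lands in an append-only audit log that can later be replayed or inspected. The file name, decision rule and field names here are all my own illustrative assumptions:

```python
# Append-only audit trail for an automated decision function (sketch).
import json, time

AUDIT_LOG = "decisions_audit.jsonl"  # assumed file name, illustrative

def audited(decision_fn):
    """Decorator: record every decision so outcomes can be retraced."""
    def wrapper(features):
        outcome = decision_fn(features)
        record = {"ts": time.time(), "input": features, "output": outcome}
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return outcome
    return wrapper

@audited
def approve_loan(features):
    # stand-in decision rule; a real system would call a trained model
    return features.get("income", 0) > 50_000

approve_loan({"income": 64_000, "age": 31})
```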
In summary, from the ancient world to the modern, humankind has been fascinated with numbers and pseudocode. From ciphers to autonomous algorithms, we have witnessed miracles of computation that impact our lives and influence our wellbeing; more so now, as the next generation of machines learns, augments and executes our automation agenda, powered by algorithms that help us leap forward towards sustainability and the greater good of our digital selves.
Nov 2021. Compiled from various publicly available internet sources; the author's views are personal.
#Algorithms #Machinelearning #datastructures #AIML.