

Artificial Intelligence: What Business Leaders Need to Know

Written by Dennis Trinkle

Research Completed by Burak Kulli, Alli McCrady, Emmanuel Quainoo, Alya Vasilevskaya

Artificial Intelligence (AI) has emerged as a revolutionary force that is transforming every sector of society. This comprehensive overview covers key concepts, applications, legal issues, and market trends related to AI. 

AI encompasses a range of technologies, including machine learning, deep learning, and neural networks. Machine learning, a subset of AI, involves algorithms that learn from data, improving over time. Deep learning, a further subset of machine learning, utilizes artificial neural networks to mimic the human brain’s processing patterns. 

AI also encompasses narrow AI, general AI, and super AI, representing different levels of machine intelligence and autonomy. While narrow AI is specialized in performing a specific task, general AI can understand, learn, and apply knowledge across a wide range of tasks. Super AI, still hypothetical, would surpass human intelligence in practically all economically valuable work. 

AI’s applications extend to automation and robotics, with use cases ranging from self-driving cars to industrial robots. Different levels of automation are defined, with fully autonomous vehicles representing the ultimate goal. The future of AI in robotics is promising, with significant technological advancements on the horizon. 

Generative AI, which includes Large Language Models (LLMs), has opened new possibilities for content creation and image generation. These technologies can generate text, create images based on textual descriptions, and even convert complex legal documents into plain English.  

Commercial applications for these technologies are expanding rapidly across sectors such as marketing, design, and law. 

However, the rise of AI also raises complex legal and ethical issues. Concerns about data privacy, the legal standing of AI, and intellectual property rights associated with AI-generated work are growing. The legal landscape is continually evolving to keep up with these technological advancements. 

The current AI marketplace is experiencing rapid growth, with a surge in startups and major tech companies heavily investing in AI research and development across various sectors. McKinsey’s national data indicates widespread adoption and application of AI across industries and functions. 

Generative AI technologies have brought automation to new heights, with diverse potential for business applications. McKinsey estimates that by 2030, AI and automation could take over activities accounting for 29.5% of total working hours in the U.S. economy, significantly impacting knowledge workers’ job activities. 

Demand for tech-related jobs is projected to grow by 23 percent by 2030, driven by digitization in sectors such as banking, insurance, pharmaceuticals, and healthcare, all of which require professionals with high-level tech skills. 

Opportunities for startups in the AI/ML space are abundant, ranging from industry-specific solutions to NLP applications, computer vision, recommendation systems, cybersecurity, drug discovery, and more. To succeed, startups must address real problems, understand customer needs, and prioritize ethical and responsible AI use. 

AI has numerous use cases across industries. In healthcare, it aids disease diagnosis and personalized medicine. In agriculture, it enables precision farming and automated harvesting. In finance, AI assists with fraud detection and algorithmic trading. Retail benefits from personalized recommendations and inventory management. Transportation and logistics leverage AI for route optimization and autonomous vehicles. In manufacturing, AI aids predictive maintenance and quality control. Customer service benefits from AI chatbots and sentiment analysis, while education uses AI for adaptive learning and plagiarism detection. 

Locally, several Indiana-based companies are actively investing in AI technology. Eli Lilly and Company focuses on streamlining drug discovery, while Cummins utilizes AI for predictive maintenance and quality control. Anthem, Inc. leverages AI to enhance customer service and optimize healthcare plans. Salesforce, with a significant presence in Indianapolis, is a leader in AI with its Einstein analytics tool. Genesys, hc1, and LifeOmic are also among the Indiana companies utilizing AI for enhancing customer experience, personalizing healthcare, and providing innovative solutions. 

In conclusion, the AI marketplace is flourishing, offering immense potential for startups and established companies alike to harness the power of AI across various sectors. As automation and AI technologies advance, it is essential for businesses to adapt and invest in tech talent, ensuring they stay competitive in this rapidly evolving landscape. 


What is Artificial Intelligence?

Artificial Intelligence (AI) is the umbrella term for the practical use of machines, particularly computer programs, to carry out tasks that normally require human intelligence. This includes tasks like creating and interpreting images, recognizing and generating speech, understanding and producing language, using tools, and many other activities that involve perception and action. The phrase “artificial intelligence” was first introduced at a Dartmouth conference in 1956. The technology has been steadily advancing for several decades and is now at an inflection point of capability and adoption. AI technology is being utilized across a wide range of sectors, including online advertising, stock trading, healthcare, pharmaceuticals, and robotics. AI’s goal is to boost productivity and efficiency and to accomplish tasks that cannot currently be completed by humans or existing technologies. 

What is machine learning?

Machine learning (ML) is a branch of AI that equips systems with the ability to automatically learn and improve from their experiences without being explicitly programmed. It is the practice of using algorithms to parse data, learn from it, and make a decision or prediction. For instance, ML algorithms can predict stock market trends or suggest items to users on a shopping website based on their past browsing or purchasing history. 

Artificial Intelligence vs Machine Learning.

Artificial Intelligence and Machine Learning are interrelated but distinct concepts. AI is the overarching principle – creating systems capable of acting intelligently, while ML is a method of creating those systems. In essence, all ML is AI, but not all AI utilizes ML. Some AI systems are rule-based and do not learn from data, which distinguishes them from ML systems. 

What Has Contributed to AI’s Sudden Prominence?

Artificial Intelligence is gaining traction in our society, largely due to the enhanced sophistication of its approaches and the growth of computational power. Recent leaps in technical capability, such as increased computing power and advanced graphics processing units, have enabled many new applications across various sectors. The surge in these capabilities can be traced back to: 

  • The evolution of faster and more efficient computer hardware, which allows for the processing of vast datasets and the execution of intricate operations, like convolutional neural networks. For instance, AI systems today can sift through thousands of medical records in mere seconds to identify symptoms that could signify diseases like cancer. 
  • The Internet, with its vast reservoir of data, has become a valuable asset for training, testing, and deploying AI. The recent advent of shared computational resources, or cloud computing, has optimized resource allocation, enabled quicker scaling, and significantly reduced computational costs. 
  • The advent of sensors that can monitor a wide range of parameters in real time. For example, wearable devices like smartwatches can gather vital signs from any location, aiding in health monitoring and risk prediction. Modern vehicles are equipped with sensors that can supply proactive maintenance alerts to drivers. 
  • The Internet of Things (IoT), bolstered by recent advancements in networking like 5G, allows for real-time data collection from sensors, cloud processing, and deployment for various applications. For example, vehicles can share location signals with each other in low visibility situations to prevent accidents. 

  • The development of sophisticated learning algorithms and architectures that enable the creation of precise models. These models can better understand historical data and identify recurring hidden patterns. 

How do computers learn?

Machine learning can be categorized into several types. Unsupervised learning involves the automatic detection of similarities among data points. Supervised learning relies on human-annotated data. Weakly-supervised learning uses semi-automatically created annotations, while reinforcement learning employs a “reward function” to supply feedback to systems after they attempt to achieve a specific goal. There are also derived models such as adversarial learning, where two models learn by challenging each other. Understanding these learning models can help businesses better leverage AI technologies for their specific needs. 

Supervised Learning.

Supervised learning is a type of machine learning where an algorithm learns from labeled training data and makes predictions based on that data. In this context, “labeled” means that each example in the training dataset is paired with an “answer” or “output” that the model can learn from. 

For instance, if you’re training a machine learning model to recognize images of cats, you would provide it with a large number of images, some of which are labeled as “cat” and others as “not cat”. The model would then learn the characteristics that differentiate cat images from non-cat images. 

There are two main types of supervised learning problems: 

  • Regression: The output is a continuous value. For example, predicting the price of a house based on features like its size, location, number of rooms, etc. 
  • Classification: The output is a category. For example, determining whether an email is spam or not spam. 

In both cases, the goal of supervised learning is to accurately map input data to the correct output label, and to be able to make accurate predictions for new, unseen data. 
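As a toy illustration of the regression case above, the sketch below fits a straight line to a handful of invented house-size/price pairs using the closed-form least-squares solution, then predicts the price of an unseen house. The numbers are made up for illustration only.

```python
# Minimal supervised-learning sketch: one-variable linear regression
# fit with the closed-form least-squares formulas.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Labeled training data: house size (sq ft) -> sale price ($1000s)
sizes  = [1000, 1500, 2000, 2500, 3000]
prices = [200,  280,  360,  440,  520]

slope, intercept = fit_line(sizes, prices)
predicted = slope * 1800 + intercept  # price for an unseen 1800 sq ft house
```

A classification model would follow the same train-then-predict pattern, but its output would be a discrete label (such as “spam” or “not spam”) rather than a continuous value.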

Unsupervised Learning.

Unsupervised learning is a type of machine learning that uses input data without labeled responses. The goal is to model the underlying structure or distribution in the data to learn more about the data itself. 

In unsupervised learning, the algorithms are left to themselves to discover interesting structures in the data. They might find clusters of similar data points (clustering), detect anomalies that stand out from the rest (anomaly detection), or simplify data into smaller, more manageable parts (dimensionality reduction). 

A common example of unsupervised learning is a recommendation system. For instance, if you’ve ever used a shopping website and seen a section that says, “Customers who bought this item also bought…”, that’s likely the result of unsupervised learning. The system groups together customers who have similar purchasing behaviors and uses this information to make recommendations. 
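To make the clustering idea concrete, here is a stripped-down 1-D k-means sketch that groups customers by annual spend. No labels are supplied; the algorithm discovers the two groups on its own. The spend figures are invented, and the two-cluster initialization is deliberately simple.

```python
# Minimal unsupervised-learning sketch: 1-D k-means with k=2.

def kmeans_1d(points, iters=20):
    """Cluster 1-D points into two groups by iterative refinement."""
    centers = [min(points), max(points)]  # crude starting guesses
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # assign each point to its nearest center
            idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            clusters[idx].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

spend = [120, 130, 125, 900, 950, 880]  # two natural customer groups
centers, clusters = kmeans_1d(spend)
```

After a few iterations the centers settle near the means of the low-spend and high-spend groups, which a recommendation system could then treat as distinct customer segments.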

Weakly Supervised Learning.

Weakly supervised learning, also known as semi-supervised learning, is a type of machine learning where the training data is only partially labeled, or the labels are not entirely accurate or reliable. This approach is often used when there is a large amount of data, but it is too time-consuming or costly to label all of it accurately. For instance, in the context of image classification, a weakly supervised learning model might only know which images contain the object of interest (say, a dog), but not the exact location or size of the dog within each image. The model then must learn not just to recognize dogs, but also to infer the missing detailed annotations from the available weak labels. 

Reinforcement Learning.

Reinforcement learning is a type of machine learning where an agent learns to behave in an environment by performing actions and observing the results. The learning system, referred to as an agent, learns from the consequences of its actions rather than from being explicitly taught. It selects its actions based on past experience (exploitation) and on new choices (exploration), which makes it essentially trial-and-error learning. 
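The exploration/exploitation trade-off can be sketched with a toy “two-armed bandit”: an agent repeatedly picks one of two slot machines and learns from the rewards which one pays better. The payout probabilities below are invented and hidden from the agent.

```python
import random

# Toy reinforcement learning: an epsilon-greedy agent learning by
# trial and error which of two arms pays off more often.

random.seed(0)
true_payout = [0.3, 0.7]   # hidden from the agent
estimates   = [0.0, 0.0]   # agent's learned value of each arm
counts      = [0, 0]
epsilon     = 0.1          # fraction of the time spent exploring

for step in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)                 # explore: random arm
    else:
        arm = estimates.index(max(estimates))     # exploit: best-known arm
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best_arm = estimates.index(max(estimates))
```

With enough trials the agent's value estimates approach the true payout rates and it settles on the better arm, despite never being told which arm that is.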

What is Deep Learning?

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks. It is a technique for implementing machine learning that uses artificial neural networks to model and understand complex patterns in datasets. It’s called “deep” learning because it uses deep neural networks – structures with multiple layers of artificial neurons, or “nodes”, each of which contributes to the model’s ability to recognize patterns in data. 

Deep learning models are designed to automatically and adaptively learn to represent data by training on a large amount of data and using the derived representations to perform tasks like image recognition, speech recognition, natural language understanding, and many others. They are capable of learning from unstructured and unlabeled data, making them even more valuable in the field of AI. 

The architecture of a deep learning model involves multiple layers of connected processing units. Each layer uses the output from the previous layer as its input. This layered, hierarchical approach to learning from data allows deep learning models to build up a complex understanding of inputs, making them particularly effective for tasks like recognizing objects in images or words in spoken language. 

Deep learning has been responsible for some of the most significant advances in AI in recent years, powering technologies like self-driving cars, voice assistants, personalized recommendations, and more. 

Artificial Intelligence vs Deep Learning.

While AI and deep learning both aim to emulate human intelligence, they do so in different ways. AI as a whole works toward creating machines that can behave intelligently; deep learning, by contrast, is a subset of machine learning that uses neural network architectures to model high-level abstractions in data. Thus, while all deep learning is AI, not all AI uses deep learning. Deep learning algorithms automatically learn representations and progressively extract high-level features from raw input data. 

What are neural networks?

Neural networks, also known as artificial neural networks (ANNs), are a key part of artificial intelligence (AI). They are computing systems inspired by the biological neural networks that constitute animal brains. These systems “learn” to perform tasks by considering examples, generally without being programmed with any task-specific rules. 

A neural network takes in inputs, which are then processed in hidden layers using weights that are adjusted during training. Then the model spits out a prediction as the output. The weights are adjusted to find patterns to optimize the outputs to match the target values. This is done using a method called backpropagation, which involves calculating the gradient (the direction and rate at which the value of an expression changes) and adjusting the weights in that direction. 
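The weight-adjustment idea above can be boiled down to a single “neuron”: compute a prediction, measure the error, take the gradient of the error with respect to the weight, and step the weight in the opposite direction. The training example and learning rate below are invented; a real network repeats this across millions of weights and many layers.

```python
# Stripped-down backpropagation: one weight, squared-error loss,
# repeated gradient-descent steps.

x, target = 2.0, 10.0   # a single (input, desired output) example
w = 0.5                 # initial weight
lr = 0.05               # learning rate

for _ in range(200):
    y = w * x                      # forward pass: the prediction
    loss = (y - target) ** 2       # how wrong the prediction is
    grad = 2 * (y - target) * x    # dLoss/dw via the chain rule
    w -= lr * grad                 # adjust the weight against the gradient

# w converges toward target / x = 5.0
```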

Neural networks can be used for various applications including but not limited to image recognition, speech recognition, natural language processing, and even playing video games and driving autonomous vehicles. They are the technology behind many of the advancements we see in AI today. 

What are the different types of neural networks?

There are several types of neural networks, each with its own design and specific use case: 

  • Feedforward Neural Networks (FNNs): These are the simplest type of artificial neural network. Information moves in only one direction, forward, from the input layer, through the “hidden” layers, to the output layer. 
  • Convolutional Neural Networks (CNNs): These are primarily used for image processing and pattern recognition. Their architecture is inspired by the organization of the animal visual cortex, the brain region responsible for processing visual input. 
  • Recurrent Neural Networks (RNNs): These networks are used for applications involving sequential data, like handwriting or speech recognition. Their connections form cycles, allowing information from earlier steps in a sequence to influence later ones. 
  • Generative Adversarial Networks (GANs): These consist of two networks: a generator that creates new data, and a discriminator (the “adversarial” network) that attempts to distinguish the generated data from real data. 

Neural networks are used in a variety of applications. These include but are not limited to:  

  • Image Recognition: Neural networks can identify patterns and differentiate between different visual inputs. 
  • Speech Recognition: These networks can understand spoken language from a variety of sources, convert it to text, or follow voice commands. 
  • Natural Language Processing (NLP): Neural networks play a crucial role in translation, sentiment analysis, and other language-related tasks. 
  • Predictive Analytics: Neural networks can forecast future events and behaviors, allowing businesses to make proactive, data-driven decisions. 

What is narrow AI?

Narrow AI, also known as weak AI, refers to AI systems designed to carry out a single task, like voice recognition, recommendation, or image recognition. They can only perform the specific tasks they are trained for. Examples include Siri, Alexa, and Google Assistant; each operates within a limited, pre-defined range of functions. 

What is general AI?

General AI, also known as strong AI, is a type of AI that can understand, learn, and apply its intelligence to any intellectual task that a human being can do. It’s more complex and sophisticated than narrow AI. General AI can make decisions, solve problems, plan, and learn from experience. However, as of now, general AI is a theoretical concept and doesn’t exist in practice. 

What is super AI?

Super AI is a hypothetical concept representing the point where AI surpasses human intelligence and can perform tasks that are currently impossible for human minds to undertake. This level of AI would not only understand and replicate human intelligence and decision-making but also be capable of exceeding it significantly. While a topic of much speculation and debate, super AI is still purely theoretical at this stage. 

What Capabilities Does AI Offer?

AI has been used across sensory (such as audio, video, and touch), cognitive (like language and emotions), and physical (movement, action, events) domains. Some examples of application include: 

Image and Video Analysis

The application of AI to tasks related to image and video processing and comprehension is known as Computer Vision. This includes classification (identifying the entities represented), captioning (describing content in natural language), processing (enhancing resolution, applying filters, etc.), and generation (creating images from captions or other prompts, including other images).

Human Speech or Writing

AI’s role in helping computer systems understand and use human speech or writing falls under Natural Language Processing (NLP). Given the ubiquity of human language across knowledge domains (including image processing), NLP plays a vital role in various fields like medicine, finance, journalism, marketing, etc. NLP algorithms can classify documents, locate pertinent information, extract information from text and structure it in databases and knowledge graphs, answer queries, detect fake news, generate texts from tables, create image and video captions, convert texts to speech or vice versa, and more.

Sound Processing and Generation

The branch of AI dedicated to understanding, processing, and generating sounds, whether speech-related, noise, or music, is known as Audio Signal Processing (ASP). In recent years, the quality of speech processors and generators has improved significantly, making interactions with conversational agents like Siri and Alexa commonplace. ASP has also been used in disease screening, such as COVID-19, from cough recordings, and in the automatic generation of music.

Sensor and Signal Detection

This area focuses on processing data from sensors in machines (like car sensors monitoring temperature or tire pressure) or humans (wearables or medical devices) to make useful predictions about the entity’s status, including health-related risks. This field is expanding due to the widespread use of sensors and IoT devices, and improved networking infrastructures that allow multiple devices to interact in real time.


Robotics

This domain aids in the creation and operation of robots, encompassing all their functions (perception, motion, interaction with the environment, interaction with humans, etc.). Robots can be seen as agents in an environment that must be perceived and analyzed in order to act (for example, an automatic mop must identify edges and obstacles and navigate around them without missing parts of the floor). Decisions made by robots can be based on models trained by supervised, weakly supervised, or reinforcement learning methods. AI can assist the decisions of self-driving cars (when to accelerate, brake, or turn) across all levels of automation (assistance, partial/conditional/full automation). While AI algorithms may be applied to robots, robotics and AI are not necessarily interdependent technologies.

What is Generative AI?

Generative AI is a branch of artificial intelligence that focuses on systems capable of creating content. It’s like giving an AI a virtual paintbrush and canvas, and then it produces its own unique artwork. But it’s not just limited to visual art; generative AI can create music, write text, design websites, and even generate synthetic voices. 

At the core of generative AI are algorithms known as generative models. These models learn the patterns, structures, and features in the data they’re trained on, and they use this knowledge to generate new data that’s similar. For instance, if you train a generative model on a dataset of classical music, it could compose its own piece of music in a classical style. 

One of the most popular types of generative models is called a Generative Adversarial Network (GAN). A GAN consists of two parts: a generator, which creates the new data, and a discriminator, which tries to distinguish between the real data and the data created by the generator. The two parts of the GAN compete against each other, leading the generator to produce increasingly realistic data. 

For business leaders, the potential applications of generative AI are vast. It could be used to create original content for marketing campaigns, design new products, simulate business data for testing new strategies, and much more. It’s a rapidly evolving field with a lot of potential, but it’s also important to be aware of the ethical considerations, such as the potential for generating deepfakes or other misleading content. 

Enhancing Productivity through Automation and Generative AI

Recent studies have highlighted the potential of automation and reskilling to stimulate productivity growth in the United States. Automation could supply a much-needed boost to stagnant productivity rates while also addressing labor shortages. 

McKinsey estimates that generative AI could raise US labor productivity by 0.5 to 0.9 percentage points annually through 2030 under a moderate adoption scenario. This range depends on whether the time saved by automation is redeployed at 2022 or 2030 productivity levels, taking into account the anticipated occupational mix in 2030. 

When combined with other automation technologies, the potential growth could be even more substantial. All forms of automation could potentially propel US productivity growth to an annual rate of 3 to 4 percent in a moderate adoption scenario. However, this would necessitate significant efforts from stakeholders in both the public and private sectors. Workers will need assistance in acquiring new skills, and the risks associated with generative AI must be effectively managed and mitigated. If transitions for workers and risks are well-handled, generative AI could significantly contribute to economic growth. 

To fully leverage the benefits of generative AI in enhancing productivity in knowledge work, employers, policymakers, and broader ecosystems need to establish clear guidelines and safeguards. Workers should view these tools not as threats to their jobs, but as enhancers of their work. As machines take over mundane or unpleasant tasks, employees can focus on more engaging work that requires creativity, problem-solving, and collaboration. Workers will need to become adept at using these tools and importantly, use the time saved to concentrate on higher-value activities. For instance, managers could automate more administrative and reporting tasks, freeing up time for strategic thinking and coaching. Similarly, researchers could speed up projects by using automation tools to sort and synthesize large data sets. 

Modalities of Generative AI.

Generative AI can be used in various modalities including text, image, voice, and video. In text, it can generate human-like stories or articles. In images, it can create completely new images or alter existing ones. In voice, it can mimic human voices or create new ones. In video, it can generate new scenes or alter existing footage. 

What is a Large Language Model (LLM)?

A large language model is a type of artificial intelligence model that has been trained on a vast amount of text data. These models are designed to generate human-like text based on the input they receive. They can answer questions, author essays, summarize texts, translate languages, and even generate poetry. They can also be fine-tuned for specific tasks, such as medical diagnosis or legal analysis. 

One of the most well-known large language models is GPT-3, developed by OpenAI. It has 175 billion parameters and was trained on hundreds of gigabytes of text. These models use a type of neural network architecture called a transformer, which allows them to handle long-range dependencies in text, making them particularly effective for many natural language processing tasks. 

Large language models can be a powerful tool. They can be used to automate customer service, generate content, supply personalized recommendations, and much more. However, it’s important to note that while these models can generate impressively fluent text, they don’t truly understand the text in the way humans do. They’re essentially sophisticated pattern-matching tools, and their output is entirely dependent on the data they were trained on. 

BERT, GPT, and the History of LLMs.

The history of large language models (LLMs) involves several key models. One of the early LLMs was BERT (Bidirectional Encoder Representations from Transformers), which was developed by Google. BERT uses a transformer architecture to understand context in both directions (left-to-right and right-to-left) of a given word. 

Following BERT, OpenAI developed the GPT (Generative Pretrained Transformer) series, including GPT-1, GPT-2, and GPT-3. Unlike BERT, GPT models are generative and use a unidirectional (left-to-right) transformer. GPT-3 has 175 billion parameters and was, at its release, among the largest publicly available LLMs; it has since been succeeded by GPT-4. 

Architecture and Linguistic Foundations of LLMs.

LLMs like BERT and GPT rely on transformer architectures. These models use a mechanism called attention, which allows them to focus on distinct parts of the input sequence when generating an output. This allows them to handle long-range dependencies in text, making them well-suited for many natural language processing tasks. 
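A minimal sketch of the scaled dot-product attention used in transformers is shown below: each position's output is a weighted mix of all value vectors, with the weights derived from query-key similarity. The 3-token, 4-dimensional random matrices are placeholders; real models use learned projections and many attention heads.

```python
import numpy as np

# Scaled dot-product attention, the core mechanism of transformers.

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # stabilized exponentials
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = softmax(scores)        # each row is a distribution over keys
    return weights @ V, weights      # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # queries for 3 token positions
K = rng.normal(size=(3, 4))  # keys
V = rng.normal(size=(3, 4))  # values
out, weights = attention(Q, K, V)
```

Because every position attends to every other in one step, the model can relate words that are far apart in the text, which is what makes transformers handle long-range dependencies well.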

The linguistic foundations of LLMs come from the vast amount of text data they are trained on. They learn the statistical patterns of the language, including syntax (how words are arranged in sentences) and semantics (the meaning of words and sentences). However, they don’t truly understand the text in the way humans do – they merely generate text based on the patterns they’ve learned. 


Tokenization.

Tokenization is the process of breaking down text into smaller pieces called tokens. These tokens could be as small as individual characters, or as large as words or sentences, depending on the granularity of the model. For instance, a word-level tokenizer would take the sentence “I love ice cream” and break it down into four tokens: [“I”, “love”, “ice”, “cream”]. In LLMs, tokenization plays a key role in preparing text data for the model. 
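A word-level tokenizer like the one in the example is a one-liner; production LLM tokenizers instead use subword schemes such as byte-pair encoding, which split rare words into smaller reusable pieces.

```python
# Minimal word-level tokenizer matching the example above.

def tokenize(text):
    return text.split()

tokens = tokenize("I love ice cream")
# tokens == ["I", "love", "ice", "cream"]
```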


Model output.

The output of an LLM is a sequence of tokens, which can be converted back into human-readable text. For instance, given the prompt “Once upon a time”, an LLM might generate the additional tokens “there was a king who lived in a castle”, resulting in the complete sentence “Once upon a time, there was a king who lived in a castle.” 

Context window.

The context window, or the amount of text an LLM can consider at one time, is determined by the model’s architecture. For instance, GPT-3 has a context window of 2,048 tokens: when generating a response, it can only consider the previous 2,048 tokens, and any input longer than that must be truncated. The more recently released GPT-4 offers two variants that can process 8,192 and 32,768 tokens respectively. 
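The truncation behavior can be sketched directly: when the conversation history exceeds the window, only the most recent tokens remain visible to the model. The placeholder token strings below are schematic; the 2048 limit matches GPT-3.

```python
# Sketch of context-window truncation.

def fit_to_context(tokens, window=2048):
    """Keep only the most recent `window` tokens."""
    return tokens[-window:]

history = [f"tok{i}" for i in range(3000)]  # 3000-token conversation
visible = fit_to_context(history)
# len(visible) == 2048; the earliest 952 tokens are dropped
```

This is why very long conversations with an LLM can “forget” their earliest turns: those tokens have simply fallen outside the window.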

How are LLMs trained?

Training an LLM involves providing it with a large dataset of text, and having it predict the next token in a sequence given the previous tokens. Over time, and with enough data, the model learns the statistical patterns of the language, such as common word sequences, grammatical rules, and even some semantic information. 
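Next-token prediction can be illustrated with a toy bigram model: count which token follows which in a tiny corpus, then “predict” the most frequent successor. Real LLMs learn far richer versions of these statistics with neural networks over billions of tokens; the corpus here is invented.

```python
from collections import Counter, defaultdict

# Toy next-token model: bigram frequency counts.

corpus = "the cat sat on the mat the cat ran".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1   # tally each observed (token, next) pair

def predict_next(token):
    """Return the most frequently observed successor of `token`."""
    return successors[token].most_common(1)[0][0]

# "the" is followed by "cat" twice and "mat" once, so "cat" is predicted.
```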


The dataset for training an LLM is usually a large corpus of text. For instance, GPT-3 was trained on a diverse range of internet text. However, the specifics of the dataset are proprietary information. The more diverse and extensive the dataset, the more capable the model is of understanding and generating a wide range of text. 

Application and Use of LLMs.

Large language models have a variety of applications. They can be used to generate human-like text for chatbots, personal assistants, content creation, and more. They can also be used for tasks like translation, summarization, and sentiment analysis. Moreover, they can serve as a tool for exploring linguistic and societal trends, as they capture the statistical patterns of the language and content they were trained on. 


Generative AI and Image Generation.

Generative AI has seen remarkable success in the field of image generation. By training on large datasets of images, generative models can produce new images that are indistinguishable from real ones. These models have a wide range of applications, from art and design, to advertising, to entertainment and beyond. 

Text to Image Generation.

Text to image generation is a challenging task that involves creating a detailed and coherent image from a textual description. For example, given the text “a two-story yellow house with a red roof and a big backyard”, a text-to-image model would generate an image that matches this description. 

Dall-E, Midjourney, Stable Diffusion, and other Image Generating AI.

There have been several notable models in the field of image generation. Dall-E, developed by OpenAI, generates images from textual descriptions. Trained on an extensive dataset of paired images and captions, it can create visuals from a wide variety of descriptions and cues, and typically produces images that approach photorealism. 

Midjourney is another model, known for producing high-quality, stylized images from text prompts. Leveraging image-processing techniques, it can also reimagine existing visuals by altering hues and incorporating artistic styles.  

Stable Diffusion is another generative model that can create high-quality images while diverging from conventional techniques. It crafts images using the “diffusion” principle: starting from a simple noise image, it incrementally refines the image across several stages until it outputs a clear result.
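The refinement loop just described can be sketched in miniature. This toy "denoiser" simply blends pixel values toward a known target image, whereas real diffusion models use a learned neural network to predict and remove noise at each stage; everything here is an illustrative assumption.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy sketch of diffusion-style sampling: start from pure noise and
    refine over many small steps. Real models replace the blend below
    with a learned neural denoiser."""
    rng = random.Random(seed)
    # Start from pure noise, as diffusion sampling does.
    image = [rng.gauss(0.0, 1.0) for _ in target]
    for _ in range(steps):
        # Each stage removes a little noise (here, a simple blend toward target).
        image = [0.9 * px + 0.1 * t for px, t in zip(image, target)]
    return image

target = [0.2, 0.5, 0.8]          # a tiny 3-"pixel" image
result = toy_denoise(target)      # ends very close to the target
```

The key design point the sketch preserves is that the image emerges gradually over many small denoising steps, rather than in a single forward pass.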

Commercial Use of Image Generation.

Image generation AI has opened a plethora of opportunities for commercial use. In the world of design and marketing, it’s being used to create unique and compelling visual content for ads, website designs, and social media campaigns. Companies can now use AI to generate images that are perfectly tailored to their brand aesthetics, bypassing the need for costly photoshoots or graphic design resources. In addition, real estate, architecture, and interior design industries are employing these technologies to create realistic 3D renderings and virtual home tours based on textual descriptions or low-quality images. 

Intellectual Property and Legal Status of AI.

As AI technology progresses, it poses unique and complex questions around intellectual property (IP) rights. Who owns the copyright to the content generated by AI? Is it the creator of the AI, the user, or the AI itself? If content generated by AI was trained using copyrighted material, can that be considered a violation of the original owner’s IP?  

As of now, legal frameworks vary across countries, and many are yet to catch up with these technological advancements. In most jurisdictions, AI-generated work does not qualify for copyright protection as it’s not created by a human author. However, the legal landscape is rapidly changing, and new laws and regulations are expected to evolve as we continue to grapple with these issues. 

These issues are making headlines in US courts. For instance, in 2023, Andersen v. Stability AI saw three artists take legal action against several AI platforms, claiming the AIs had used their unique artistic styles without permission and let users make art that looked similar to their original pieces. If the court sides with the artists, the AI companies could face substantial damages. 

The software industry itself is responding to concerns over intellectual property rights and AI. Glaze, a product in early development by a University of Chicago research team, embeds a human-invisible noise layer intended to prevent image-generation tools like Stable Diffusion from learning from the image. 

These legal and practical issues will be in flux for some time as the courts work out precedent and interpretation, new laws and regulations are passed, and new technologies and approaches rapidly evolve. 

Advantages and Disadvantages of Artificial Intelligence.

Artificial Intelligence brings a host of benefits: it increases efficiency, enables unprecedented scale, supplies personalization, and facilitates the discovery of patterns and insights that are difficult for humans to discern. However, it’s not without its downsides. Ethical and privacy issues abound, as do concerns about job displacement due to automation. Moreover, the decisions made by AI systems are not always transparent, leading to potential bias and fairness issues. 

Legal Standing of AI.

The legal standing of AI is an area of ongoing debate and uncertainty. Can an AI system be held liable for its actions? Who is responsible when an AI makes a mistake? These questions are becoming increasingly pertinent as AI systems become more autonomous and integrated into our lives. Currently, AI systems are treated as tools or property under the law, but as AI continues to evolve, so will the legal frameworks that govern them. 

AI and Data Privacy.

Data privacy is a significant concern in the age of AI. AI systems, particularly machine learning models, require vast amounts of data to function effectively. This data often includes sensitive personal information. While this data is used to train models to be more effective and deliver personalized experiences, it also raises serious privacy concerns. Regulations like the EU’s General Data Protection Regulation (GDPR) are aimed at protecting individuals’ privacy, but the global nature of the internet and differences in regional regulations present ongoing challenges. 

The AI Stack.

From a business and investment perspective, the AI stack can be seen as a hierarchy of technologies and capabilities that create value and contribute to AI-driven products and services. The layers can be understood as distinct stages of the AI value chain, each presenting its own opportunities and challenges for businesses and investors. Here are the AI stack layers from that perspective: 

  • Data Layer: The foundation of any successful AI project is high-quality and diverse data. Companies that can collect, organize, and leverage large datasets have a competitive advantage. Investing in data acquisition, data management, and data security is crucial for AI-driven businesses. 
  • Data Processing Layer: Once data is collected, preprocessing and feature engineering become essential to extract valuable insights. Businesses investing in efficient data processing methods can improve the performance of their AI algorithms and gain more accurate predictions. 
  • Machine Learning (ML) Layer: ML algorithms are at the core of AI systems. Companies investing in research and development of innovative ML algorithms, or leveraging existing state-of-the-art algorithms, can create valuable products or services with better performance and efficiency. 
  • Model Layer: The trained ML models are the intellectual property and knowledge base of an AI-driven business. Developing proprietary models or acquiring licenses for specialized models can give companies a competitive edge. 
  • Inference Layer: The ability to deploy ML models efficiently and at scale is crucial for AI applications. Companies investing in robust inference infrastructure can deliver real-time AI-powered products and services to their customers. 
  • Automation Layer: Businesses can capitalize on AI’s automation capabilities to optimize processes, reduce costs, and improve efficiency. Investing in robotic process automation (RPA) and intelligent automation technologies can yield significant returns. 
  • Cognitive Layer: Investing in innovative natural language processing (NLP), computer vision, and other cognitive capabilities can enable businesses to create more human-like and contextually aware AI systems, leading to enhanced user experiences. 
  • Knowledge Layer: Businesses can focus on developing systems that can reason, learn, and adapt based on acquired knowledge. This layer may involve investing in knowledge representation techniques, ontologies, and knowledge graphs. 
  • Contextual Layer: Understanding context and the broader environment in which AI operates is crucial for delivering personalized and context-aware experiences. Companies can invest in technologies that enable AI to consider situational awareness and tailor responses accordingly. 
  • Application Layer: The ultimate goal of AI investments is to create valuable AI-powered applications and services that cater to specific industries and domains. Businesses can focus on building AI-driven products that solve real-world problems and address market needs. 
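As a rough illustration of the lower layers of the stack (data, processing, model, inference), here is a deliberately tiny sketch. The records, feature, and threshold "model" are all made-up assumptions; real stacks use ML frameworks and trained models at each layer.

```python
# Data layer: raw records collected from some (hypothetical) source.
raw = [{"spend": 120, "churned": 0}, {"spend": 15, "churned": 1},
       {"spend": 90, "churned": 0}, {"spend": 10, "churned": 1}]

# Data-processing layer: extract and min-max normalize a feature.
spends = [r["spend"] for r in raw]
lo, hi = min(spends), max(spends)
features = [(s - lo) / (hi - lo) for s in spends]

# ML/model layer: a toy "trained" model -- here, just a fixed threshold
# that happens to separate the two classes in this tiny dataset.
threshold = 0.5

# Inference layer: apply the model to a new record.
def predict_churn(spend: float) -> int:
    """Return 1 (likely to churn) for low normalized spend, else 0."""
    return 1 if (spend - lo) / (hi - lo) < threshold else 0
```

Each comment marks where a given stack layer would sit in a production system; the value chain described above is about replacing each toy step with robust, scalable infrastructure.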

From an investment perspective, understanding the different layers of the AI stack can help investors identify opportunities and risks in AI-related startups and companies. Companies that demonstrate strong capabilities in key layers, have a clear strategy for product development, and address ethical considerations around AI usage are more likely to attract investment and succeed in the AI market. Additionally, investors should be aware of the potential challenges related to data privacy, security, and regulatory compliance in the AI space. 

Current AI Marketplace.

The AI marketplace is booming, with new startups appearing constantly and major tech companies heavily investing in AI research and development. Applications of AI span a wide range of sectors, from healthcare and education to finance, transportation, and entertainment. 

National data from McKinsey shows the increasing adoption and application of AI across all sectors and functions. 

AI and the Workforce

With the emergence of generative AI technologies, automation has reached new heights. Despite being in its nascent phase, generative AI carries considerable and diverse potential for business applications. It can write code, design products, craft marketing content and plans, optimize processes, review legal documents, deliver customer service through chatbots, and even propel scientific innovation. It can function autonomously or incorporate human interaction, with the latter more probable currently given the technology’s stage of development. 

This technological progress indicates that automation is on the verge of influencing a broader array of professional activities, encompassing areas of expertise, human engagement, and creativity, and the timeline for adopting automation may be significantly accelerated. McKinsey estimates that by 2030 AI and automation could undertake work representing 29.5% of total working hours in the U.S. economy. For knowledge workers, the clearest impact is that generative AI is likely to significantly change their mix of work activities. 

By 2030, McKinsey estimates a 23 percent increase in the demand for tech-related jobs. Recent attention-grabbing tech-sector layoffs do not change the longer-term demand for tech talent among companies of all sizes and sectors as the economy continues to digitize. Banking, insurance, pharmaceutical, and healthcare companies, to cite a few examples, are undergoing substantial digital transitions and thus need tech professionals equipped with high-level tech skills. 

Start-Up Opportunities and AI

Start-up opportunities related to AI and ML (Machine Learning) are abundant, as these technologies continue to transform various industries and create new possibilities. Here are some promising start-up opportunities in the AI and ML space: 

  • Industry-Specific AI Solutions: Develop AI-powered solutions tailored to specific industries, such as healthcare, finance, agriculture, retail, or transportation. These solutions could optimize processes, improve decision-making, and enhance customer experiences within the targeted sector. 
  • Natural Language Processing (NLP) Applications: Create NLP-based products and services for sentiment analysis, language translation, chatbots, virtual assistants, content summarization, or voice recognition systems. 
  • Computer Vision Solutions: Build computer vision applications for object detection, facial recognition, image analysis, autonomous vehicles, quality control, or surveillance systems. 
  • Recommendation Systems: Develop AI-driven recommendation engines for personalized product recommendations, content suggestions, or travel and entertainment planning. 
  • Anomaly Detection and Predictive Maintenance: Offer AI-powered solutions to detect anomalies in industrial processes, predict equipment failures, and enable proactive maintenance, thus reducing downtime and operational costs. 
  • AI-Powered Cybersecurity: Create ML-based cybersecurity solutions that can detect and prevent cyber threats, including malware, phishing attacks, and data breaches. 
  • AI for Drug Discovery: Use ML algorithms to accelerate drug discovery processes, identify potential drug candidates, and optimize molecular structures. 
  • AI in Education: Develop AI-powered tools for personalized learning, intelligent tutoring systems, automated grading, or educational content recommendation. 
  • AI-Driven E-commerce Solutions: Provide AI-based e-commerce platforms that optimize product recommendations, pricing strategies, and customer support, leading to increased conversions and customer satisfaction. 
  • AI in Agriculture: Build AI solutions for precision agriculture, crop monitoring, automated irrigation, and pest detection to improve crop yield and resource efficiency. 
  • AI in Financial Services: Create AI applications for fraud detection, risk assessment, algorithmic trading, personalized financial advice, or chatbot-based customer support in the financial sector. 
  • AI-Powered Remote Sensing: Use ML to analyze satellite and aerial data for environmental monitoring, disaster response, urban planning, and resource management. 
  • AI for Healthcare: Develop AI solutions for medical image analysis, patient diagnosis, disease prediction, drug discovery, or personalized medicine. 
  • AI-Enabled IoT (Internet of Things) Devices: Integrate AI capabilities into IoT devices, enabling them to make intelligent decisions and respond to user needs more effectively. 
  • AI in Human Resources: Offer AI-driven HR tools for candidate screening, employee performance analysis, workforce planning, and employee engagement. 
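The anomaly-detection opportunity above can be sketched with a simple z-score check. The sensor readings and threshold here are illustrative assumptions; production predictive-maintenance systems use far richer models over streaming equipment data.

```python
import statistics

def detect_anomalies(readings, z_threshold=3.0):
    """Toy z-score anomaly detector: flag readings more than z_threshold
    standard deviations from the mean. Illustrative only."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) / stdev > z_threshold]

# Twenty normal vibration readings, then one spike from a failing bearing.
sensor_log = [10.0] * 20 + [100.0]
alerts = detect_anomalies(sensor_log)   # flags the 100.0 spike
```

Flagging such outliers early is what lets maintenance happen before a failure, rather than after, which is the core value proposition of predictive maintenance.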

When exploring these opportunities, it’s crucial for start-ups to focus on solving real problems, understanding customer needs, and ensuring ethical and responsible AI use. Additionally, building scalable and robust AI models, acquiring high-quality data, and staying updated with the latest advancements in AI and ML technologies are essential for success in this competitive landscape. 

Use Case Examples for AI

Healthcare: 

  • Disease Diagnosis: AI algorithms can analyze imaging data and other medical records to help doctors diagnose diseases more accurately and quickly. 
  • Personalized Medicine: AI can help determine the most effective treatments based on an individual’s genetic makeup and lifestyle factors. 

Agriculture: 

  • Precision Farming: AI can analyze soil and crop data to recommend optimal planting times, pest control, and water management strategies. 
  • Automated Harvesting: AI-driven robots can perform tasks like fruit picking, weeding, and planting. 

Finance: 

  • Fraud Detection: AI can identify patterns that might indicate fraudulent activities and raise alerts in real-time. 
  • Algorithmic Trading: AI can analyze vast amounts of market data to make informed trade decisions. 

Retail: 

  • Personalized Recommendations: AI can analyze customer behavior and preferences to suggest products they might be interested in. 
  • Inventory Management: AI can forecast demand for various products and optimize inventory accordingly. 

Transportation and Logistics: 

  • Route Optimization: AI can calculate the most efficient routes for delivery trucks based on real-time traffic data. 
  • Autonomous Vehicles: AI is at the heart of self-driving car technology. 
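Route optimization, at its core, rests on shortest-path algorithms. Below is a toy sketch using Dijkstra's algorithm over a small made-up road graph with hypothetical travel times in minutes; real systems fold in live traffic data as noted above.

```python
import heapq

# Hypothetical road graph: node -> {neighbor: travel time in minutes}.
ROADS = {
    "depot": {"a": 4, "b": 2},
    "a": {"customer": 5},
    "b": {"a": 1, "customer": 8},
    "customer": {},
}

def fastest_route_minutes(graph, start, goal):
    """Dijkstra's algorithm: return the minimum travel time start -> goal."""
    queue = [(0, start)]            # priority queue of (cost so far, node)
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        for nxt, minutes in graph[node].items():
            new_cost = cost + minutes
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt))
    return None                      # goal unreachable
```

Here the fastest route is depot -> b -> a -> customer (8 minutes), beating the direct-looking depot -> a -> customer (9 minutes) -- the kind of non-obvious saving that, multiplied across a fleet, is the business case for AI routing.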

Manufacturing: 

  • Predictive Maintenance: AI can analyze machine data to predict when maintenance will be needed, reducing downtime. 
  • Quality Control: AI systems can spot defects or problems in products faster and more accurately than human inspection. 

Customer Service: 

  • Chatbots: AI chatbots can handle routine customer inquiries, freeing up human agents for more complex tasks.
  • Sentiment Analysis: AI can analyze customer feedback to identify trends and improve service. 

Education: 

  • Adaptive Learning: AI can tailor educational content to each student’s needs, improving learning outcomes. 
  • Plagiarism Detection: AI can scan student assignments and papers to detect any instances of plagiarism. 

Indiana Companies Investing in AI

Several Indiana-based companies are making substantial investments in AI technology.  

  • Eli Lilly and Company: This pharmaceutical company based in Indianapolis, Indiana, has been investing in AI and machine learning to streamline drug discovery and development processes.
  • Cummins: Based in Columbus, Indiana, this manufacturer of engines, filtration, and power generation products has been utilizing AI for predictive maintenance, quality control, and streamlining operations.
  • Anthem, Inc: This Indianapolis-based health insurance provider has been leveraging AI to enhance customer service, optimize health care plans, and predict patient outcomes.
  • Salesforce: Even though it’s primarily San Francisco-based, Salesforce has a significant presence in Indianapolis after the acquisition of ExactTarget. Salesforce is a leader in AI, with its AI-powered analytics tool, Einstein. 
  • Genesys: Based in Indianapolis, Genesys offers a wide range of customer experience tools, many of which utilize AI and machine learning to enhance customer interactions, route calls, and predict customer behavior.
  • hc1: A healthcare-focused tech company based in Indianapolis, hc1 uses AI to personalize healthcare and improve lab testing.
  • LifeOmic: This Indianapolis-based company uses AI to provide personalized healthcare solutions. 

Venture Support.

Are you an Indiana-based AI startup navigating the fundraising process? 

Connect with our team of trusted experts to gain valuable feedback and move closer to your own venture success.