What is intelligence?
Methods and goals in AI
Strong AI, applied AI, and cognitive simulation
Alan Turing and the beginning of AI
Early milestones in AI
The first AI programs
Evolutionary computing
AI programming languages
- Python: Python is a versatile programming language with a large user base in the AI industry. It has a sizable developer community and a wide range of modules and frameworks that make implementing AI algorithms straightforward. TensorFlow, Keras, and scikit-learn are a few popular Python AI libraries.
- LISP: One of the first programming languages, LISP is renowned for its ability to handle symbolic data. It is frequently used in AI research and expert system development.
- Prolog: Prolog is a logic programming language frequently used in AI for knowledge representation and natural language processing.
- Java: Java is a popular programming language used in many kinds of applications, AI among them. It has a sizable developer community and a broad selection of libraries that make implementing AI algorithms straightforward.
- R: R is a programming language and environment for statistical computing and graphics. It is frequently used in AI for data analysis and machine learning.
- C++: C++ is a high-performance programming language frequently used to build complex, performance-critical AI systems and applications.
- Julia: Julia is an open-source, high-performance programming language for technical computing, with a syntax familiar to users of other technical computing environments.
- MATLAB: MATLAB is a popular numerical computing environment and programming language, widely used in AI research and development.
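To illustrate how little code a library like scikit-learn requires, here is a minimal sketch that trains a decision-tree classifier. The tiny dataset (hours studied, hours slept, pass/fail label) is invented purely for illustration, and the example assumes scikit-learn is installed.

```python
# Minimal sketch: training a classifier with scikit-learn.
# The toy dataset below is invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Features: [hours studied, hours slept]; labels: 1 = passed, 0 = failed
X = [[1, 4], [2, 5], [8, 7], [9, 8], [3, 4], [7, 6]]
y = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

# Predict outcomes for two unseen students
print(clf.predict([[8, 8], [1, 3]]))
```

The `fit`/`predict` pattern shown here is shared by essentially all scikit-learn estimators, which is a large part of why Python is so widely used for AI prototyping.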
Microworld programs
- The expert system's knowledge base contains information about its subject matter, including facts and rules. It is typically written in a formal language such as Prolog or LISP.
- The inference engine is the component that applies the knowledge base's rules to the current problem and derives new conclusions. Several reasoning methods are used, including forward chaining, backward chaining, and case-based reasoning.
- The user interface is the component that enables human interaction and communicates the problem to be solved.
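The interaction of these components can be sketched in a few lines of Python. The facts and rules below are hypothetical, invented for illustration; a real expert system such as MYCIN used hundreds of production rules.

```python
# A minimal forward-chaining inference engine (illustrative sketch).
# Facts are strings; each rule maps a set of premises to one conclusion.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

# Toy knowledge base (hypothetical, for illustration only)
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

derived = forward_chain({"fever", "cough", "positive_culture"}, rules)
print("recommend_antibiotics" in derived)  # True
```

Backward chaining works the other way around: it starts from a goal (e.g. "recommend_antibiotics") and searches for rules whose conclusions support it, asking for missing facts as needed.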
- Deductive reasoning: drawing a conclusion based on logical deduction from known premises
- Inductive reasoning: drawing a conclusion based on a pattern in the data
- Abductive reasoning: drawing a conclusion based on the best explanation for the observed data
- Supervised learning: learning from labeled data
- Unsupervised learning: learning from unlabeled data
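The difference between the two can be made concrete with a small pure-Python sketch (the one-dimensional data points are invented for illustration): a nearest-neighbor classifier learns from labeled examples, while a simple two-means clustering pass groups unlabeled ones on its own.

```python
# Supervised vs. unsupervised learning, sketched with toy 1-D data.

def nearest_neighbor(labeled, x):
    """Supervised: predict the label of the closest labeled example."""
    value, label = min(labeled, key=lambda p: abs(p[0] - x))
    return label

def two_means(points, iters=10):
    """Unsupervised: split unlabeled points into two clusters."""
    c1, c2 = min(points), max(points)          # initial centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                 # recompute centroids
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
print(nearest_neighbor(labeled, 7.5))          # "high"
print(two_means([1.0, 1.5, 2.0, 8.0, 8.5, 9.0]))
```

Note that the classifier needed the labels "low"/"high" to be supplied, while the clustering function discovered the two groups from the raw values alone.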
MYCIN
Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results. The program could request further information concerning the patient, as well as suggest additional laboratory tests, to arrive at a probable diagnosis, after which it would recommend a course of treatment. If requested, MYCIN would explain the reasoning that led to its diagnosis and recommendation. Using about 500 production rules, MYCIN operated at roughly the same level of competence as human specialists in blood infections and rather better than general practitioners.
Nevertheless, expert systems have no common sense or understanding of the limits of their expertise. For instance, if MYCIN were told that a patient who had received a gunshot wound was bleeding to death, the program would attempt to diagnose a bacterial cause for the patient’s symptoms. Expert systems can also act on absurd clerical errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age data were accidentally transposed.
The CYC project
- Defining the problem and selecting the appropriate type of ANN. For example, supervised learning problems like image classification or language translation are typically solved using feedforward or recurrent neural networks, while unsupervised learning problems like anomaly detection or clustering are typically addressed with autoencoders or self-organizing maps.
- Preparing the data. This includes splitting the data into training, validation, and test sets, and preprocessing the data to make it suitable for the ANN.
- Designing the architecture of the network. This includes selecting the number of layers, the number of neurons in each layer, and the activation functions to be used.
- Training the network. This involves feeding the data through the network, adjusting the weights of the neurons based on the errors, and repeating this process until the network reaches a satisfactory level of accuracy.
- Fine-tuning the network. This includes adjusting the hyperparameters like the learning rate, batch size, and number of training iterations to optimize the performance of the network.
- Evaluating the network. This includes testing the network on new data and measuring its performance using metrics like accuracy, precision, and recall.
- Deploying the network. This includes exporting the trained network and integrating it into an application or system.
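The steps above (short of deployment) can be sketched end to end with NumPy on a toy XOR problem. The architecture and hyperparameters below are arbitrary choices for illustration, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Define the problem and prepare the data: a toy XOR dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2. Design the architecture: 2 inputs -> 8 hidden neurons -> 1 output.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# 3. Train: forward pass, backpropagate the error, adjust the weights.
lr, losses = 0.5, []
for step in range(5000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    err = out - y                   # prediction error
    if step % 500 == 0:
        losses.append(float(np.mean(err ** 2)))
    # Backpropagation (the sigmoid derivative is a * (1 - a))
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# 4. Evaluate: the loss should have dropped during training.
pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("loss:", losses[0], "->", losses[-1])
print("predictions:", pred.ravel())
```

Fine-tuning in practice means repeating this loop while varying `lr`, the hidden-layer width, and the number of iterations, and comparing the results on a held-out validation set rather than the training data used here.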
- Feedforward neural networks: These are the most basic type of neural network and are used for tasks such as image classification and speech recognition. They consist of layers of neurons connected to each other in a directed acyclic graph. The input is passed through the layers and processed by each neuron, eventually producing an output.
- Convolutional neural networks (CNNs): These are a type of feedforward neural network that are specifically designed for image and video processing. They use convolutional layers which are designed to extract features from images, making them useful for tasks such as object detection and image segmentation.
- Recurrent neural networks (RNNs): These are neural networks that are designed to process sequential data such as time series or natural language. They have feedback connections which allow them to maintain a hidden state, allowing them to process input sequences of varying lengths.
- Generative Adversarial Networks (GANs): These are a type of neural network that consist of two parts: a generator network and a discriminator network. The generator network is trained to produce new data that is similar to a given dataset, while the discriminator network is trained to identify whether a given input is real or generated. GANs are used for tasks such as image synthesis and text-to-speech.
- Autoencoders: Autoencoders are neural networks trained to learn a compressed representation of the input data. They consist of two parts: an encoder, which is trained to map the input to a lower-dimensional representation, and a decoder, which is trained to map that representation back to the original input. Autoencoders are useful for tasks such as dimensionality reduction and anomaly detection.
- Self-organizing maps (SOMs): SOMs are a type of neural network that is used for unsupervised learning. They consist of a two-dimensional grid of neurons that are trained to organize themselves such that similar inputs are mapped to nearby neurons. SOMs are useful for tasks such as data visualization and clustering.
- Hopfield networks: These are a type of recurrent neural network designed to store and retrieve patterns. They consist of a single layer of neurons that are fully connected to each other. The network can settle into a stable state, known as an attractor, which allows a stored pattern to be recalled from a noisy or partial input.
- Boltzmann machines (BMs): These are a type of neural network that are used for unsupervised learning. They consist of a layer of visible neurons and a layer of hidden neurons. They use a probabilistic approach to learn the underlying probability distribution of the input data. BMs are useful for tasks such as density estimation and feature learning.
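As one concrete instance from the list above, a Hopfield network can be sketched in a few lines of NumPy: a single bipolar pattern (chosen arbitrarily for illustration) is stored with a Hebbian weight rule, and the network then settles back to it from a corrupted input.

```python
import numpy as np

# Store one bipolar (+1/-1) pattern in a Hopfield network.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian learning rule: W = p p^T with a zeroed diagonal
# (neurons have no self-connections).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Corrupt two bits of the stored pattern.
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1

# Synchronous update: each neuron takes the sign of its weighted input,
# pulling the state toward the stored attractor.
recalled = np.where(W @ noisy >= 0, 1, -1)
print(np.array_equal(recalled, pattern))  # True
```

With a single stored pattern, one update step is enough to repair the two flipped bits; with many stored patterns, updates are iterated until the state stops changing, and capacity limits (roughly 0.14 patterns per neuron for random patterns) come into play.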