Here are three AI systems identified by different group members, along with their PEAS characterization and a brief description:
AI System: Siri (Apple's virtual assistant)
PEAS Characterization:
Description: Siri is a virtual assistant developed by Apple that uses natural language processing and machine learning techniques to interact with users. It helps users perform various tasks, such as making calls, sending messages, setting reminders, and answering questions. Siri's performance is measured based on its ability to accurately understand and respond to user queries, providing relevant and helpful information. It operates in the environment of mobile devices, relying on sensors like a microphone for voice input and a touch screen for user interaction. The system's actuators include text-to-speech synthesis for generating voice responses and a display interface for visual information. Siri aims to provide a convenient and efficient user experience through voice-based interactions.
AI System: Roomba (autonomous vacuum cleaner)
PEAS Characterization:
Description: Roomba is an autonomous vacuum cleaner designed to clean indoor spaces. It uses a combination of sensors and algorithms to navigate the environment and perform cleaning tasks. The performance of Roomba is measured based on its efficiency in cleaning the given area, ensuring coverage of the floor, and avoiding obstacles like furniture and walls. The system's actuators include movement motors to navigate and reach different areas, a suction mechanism to collect dirt and debris, and brushes for thorough cleaning. Roomba relies on various sensors such as bump sensors to detect physical obstacles, cliff sensors to avoid falling off edges, dirt sensors to identify dirty areas, wall sensors to follow walls, and floor sensors to adjust cleaning settings based on different floor types. Roomba aims to provide automated and efficient cleaning solutions for households.
AI System: AlphaGo (AI-based board game player)
PEAS Characterization:
Description: AlphaGo is an AI system developed by DeepMind that became known for its mastery of the game of Go. AlphaGo utilizes deep neural networks and reinforcement learning techniques to analyze and play the game strategically. Its performance is measured based on its winning rate against human opponents, move quality, and strategic decision-making capabilities. The environment for AlphaGo is the game board of Go, with defined rules and opponent moves. The system's actuator involves placing game pieces (stones) on the board based on chosen moves. Sensors include the representation of the game board state, opponent moves, and the rules of the game. AlphaGo aims to demonstrate advanced AI capabilities by achieving exceptional performance in complex board games.
AI System: Tesla Autopilot (self-driving car system)
PEAS Characterization:
Description: Tesla Autopilot is an AI-based system designed for semi-autonomous driving in Tesla vehicles. It utilizes a combination of sensors, machine learning algorithms, and advanced computer vision techniques to perceive and analyze the surrounding environment. The performance of Tesla Autopilot is measured based on its ability to ensure safe and efficient driving, including adherence to traffic rules and responsiveness to changing road conditions. The system's actuators control various aspects of the vehicle, such as steering, acceleration, braking, and signaling. Tesla Autopilot relies on sensors like cameras, radar, lidar, GPS, and other sensors to detect and monitor the road, traffic, pedestrians, and other vehicles. Its goal is to assist drivers by providing advanced driver assistance features and promoting enhanced safety on the road.
AI System: Netflix Recommendation System
PEAS Characterization:
Description: The Netflix Recommendation System is an AI-based system that analyzes user data and content metadata to provide personalized movie and TV show recommendations to its subscribers. It employs machine learning algorithms, collaborative filtering, and content-based filtering techniques to understand user preferences and suggest relevant content. The performance of the system is measured based on user satisfaction, the quality of personalized recommendations, viewer engagement, and the relevance of recommended content. The environment of the Netflix Recommendation System is the online streaming platform itself, which includes user profiles, their viewing history, and the available content library. The system's actuators involve generating content recommendations, curating personalized content lists, and presenting user interface elements that facilitate content discovery. Sensors include user preferences captured through ratings, viewing history, and interactions with the platform, as well as content metadata such as genre, cast, and ratings. The goal of the system is to enhance the user experience by providing tailored recommendations and promoting content discovery based on individual preferences.
An Autocorrection tool is an AI-based system designed to assist users in automatically correcting spelling or typing errors in their text. This tool utilizes advanced natural language processing (NLP) techniques to analyze the input text and identify potential errors or inconsistencies. It compares the words or phrases against a dictionary or language model to determine if they match known correct words or phrases. When an error is detected, the Autocorrection tool suggests a corrected version, replacing the erroneous text with the most likely intended word or phrase. The system may also consider contextual information, such as nearby words or grammar rules, to enhance the accuracy of the autocorrection suggestions. Autocorrection tools are commonly used in word processors, text messaging applications, and other text input interfaces to improve the speed and accuracy of text composition.
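As a toy illustration of the dictionary-comparison step, here is a minimal autocorrection sketch (assuming plain Levenshtein edit distance and a hand-made word list; real tools use full language models and contextual information):

```python
# Minimal autocorrection sketch: suggest the closest dictionary word.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def autocorrect(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the input word."""
    if word in dictionary:
        return word
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = ["hello", "world", "spelling", "correct"]
print(autocorrect("helo", words))   # -> "hello"
```

A real tool would also rank candidates by word frequency and surrounding context rather than edit distance alone.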
Our project idea for the Fundamentals of Artificial Intelligence course is an AI system that can play Connect Four. To decide how to win the game strategically, this system will employ a variety of AI approaches such as search algorithms, heuristics, and machine learning. The project involves implementing several search algorithms, such as Minimax, Alpha-Beta Pruning, and Monte Carlo Tree Search, as well as investigating machine learning strategies such as deep learning and reinforcement learning.
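As a sketch of the core search idea, here is Alpha-Beta Pruning on an abstract game tree (the nested-list tree and its leaf values are made up for illustration; a real Connect Four agent would generate child board states and a heuristic evaluation instead):

```python
# Alpha-beta pruning over a game tree given as nested lists with numeric leaves.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):      # leaf: static evaluation value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: MIN avoids this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cutoff: MAX avoids this branch
            break
    return value

tree = [[3, 5], [2, 9], [0, 7]]             # depth-2 tree, root is MAX
print(alphabeta(tree, True))                # -> 3
```

Pruning makes the later MIN branches terminate early here: once their first leaf is below alpha, the rest need not be examined.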
This system identifies and filters spam messages. It will save the user the work of separating relevant messages from unwanted ones, which, besides taking up a lot of space, can also contain malicious content. The system will use a variety of AI concepts such as machine learning and natural language processing. These techniques will enable it to identify spam messages by analyzing their contents. It will also be able to identify addresses that frequently send spam messages and block them.
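The contents-analysis idea can be sketched with a toy Naive Bayes classifier (the training messages, whitespace tokenization, and add-one smoothing are illustrative assumptions; a production filter would train on a large corpus with richer features):

```python
# Toy Naive Bayes spam classifier over whitespace-tokenised messages.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) with label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior plus log likelihood with add-one (Laplace) smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

data = [("win free money now", "spam"),
        ("free prize claim now", "spam"),
        ("meeting at noon tomorrow", "ham"),
        ("see you at the meeting", "ham")]
counts, totals = train(data)
print(classify("claim your free money", counts, totals))  # -> "spam"
```

Blocking frequent spam senders would sit on top of this, e.g. a counter keyed by sender address.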
This system will be designed to recommend music to users based on their preferences. It will analyze user data and music metadata to provide personalized recommendations. It will use a variety of AI concepts such as machine learning and collaborative filtering. These techniques will enable it to understand user preferences and suggest relevant music. It will also be able to learn from user interactions and improve its performance over time.
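A minimal user-based collaborative filtering sketch (the ratings matrix and cosine-similarity weighting below are illustrative; real systems operate on sparse matrices with millions of users and items):

```python
# Recommend an unheard track by weighting other users' ratings by similarity.
import math

ratings = {  # user -> {track: rating}
    "alice": {"song_a": 5, "song_b": 3, "song_c": 4},
    "bob":   {"song_a": 5, "song_b": 3, "song_d": 5},
    "carol": {"song_b": 1, "song_c": 2, "song_d": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[t] * v[t] for t in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Suggest the unheard track with the highest similarity-weighted score."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for track, r in their.items():
            if track not in ratings[user]:
                scores[track] = scores.get(track, 0.0) + sim * r
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # -> "song_d"
```

Learning from interactions over time would mean updating the ratings matrix (or a learned embedding) as users listen, skip, and like tracks.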
This system will be designed to interact with users and provide them with information or assistance. It will be able to understand user queries and respond appropriately. It will use a variety of AI concepts such as natural language processing and machine learning. These techniques will enable it to understand user queries and provide relevant responses. It will also be able to learn from user interactions and improve its performance over time.
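A minimal sketch of the query-understanding step as keyword-based intent matching (the intents and canned replies are made up; a real assistant would use trained language models rather than keyword overlap):

```python
# Pick the intent whose keyword set overlaps the user's query the most.

INTENTS = {
    "greeting": ({"hello", "hi", "hey"}, "Hello! How can I help you?"),
    "weather":  ({"weather", "rain", "sunny"}, "I cannot check live weather yet."),
    "time":     ({"time", "clock"}, "I do not have access to a clock."),
}

def respond(query: str) -> str:
    words = set(query.lower().split())
    best, overlap = None, 0
    for _name, (keywords, reply) in INTENTS.items():
        n = len(words & keywords)
        if n > overlap:
            best, overlap = reply, n
    return best or "Sorry, I did not understand that."

print(respond("hi there"))  # -> "Hello! How can I help you?"
```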
Usage:
python runner.py problem <problem_name> algorithm <algorithm_name> [OPTIONS]
OPTIONAL COMMANDS
--file <file_path>
Specify the file path for the problem (default: knapsackdata.txt for knapsack, tspdata.txt for tsp)
--num-of-items <number_of_items>
Specify the number of items (default: 10 for knapsack, 16 for tsp)
--experiment
Display a graph of the runtime comparison
--all
Run all algorithms at once
--verbose
Enable verbose mode for detailed live output
--num-of-generations <num_generations>
Set the number of generations for the genetic algorithm (default: 2000)
--population-size <population_size>
Set the population size for genetic algorithm (default: 100)
--mutation-rate <mutation_rate>
Set the mutation rate for genetic algorithm (default: 0.1)
--crossover-rate <crossover_rate>
Set the crossover rate for genetic algorithm (default: 0.8)
--temperature <initial_temperature>
Set the initial temperature for simulated annealing (default: 100.0)
--cooling-rate <cooling_rate>
Set the cooling rate for simulated annealing (default: 0.95)
--help
Show usage information for the script
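For illustration, the options above could be parsed with argparse roughly as follows (a hypothetical sketch mirroring the usage shown; the actual runner.py may be organised differently):

```python
# Hypothetical argparse setup for the runner.py command line shown above.
import argparse

parser = argparse.ArgumentParser(prog="runner.py")
parser.add_argument("keyword", choices=["problem"], help="literal word 'problem'")
parser.add_argument("problem", choices=["knapsack", "tsp"])
parser.add_argument("rest", nargs="*", help="optional 'algorithm <name>' pair")
parser.add_argument("--experiment", action="store_true")
parser.add_argument("--all", action="store_true")
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--num-of-generations", type=int, default=2000)
parser.add_argument("--population-size", type=int, default=100)
parser.add_argument("--mutation-rate", type=float, default=0.1)
parser.add_argument("--crossover-rate", type=float, default=0.8)
parser.add_argument("--temperature", type=float, default=100.0)
parser.add_argument("--cooling-rate", type=float, default=0.95)

args = parser.parse_args(["problem", "knapsack", "algorithm", "hc", "--experiment"])
print(args.problem, args.rest, args.experiment)  # -> knapsack ['algorithm', 'hc'] True
```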
To try out these findings, run the following commands:
Runtime and Fitness
Code (runner.py): To Run All At Once
```
python runner.py problem knapsack --experiment --all
```
Code (runner.py): Hill Climbing Algorithm
```
python runner.py problem knapsack algorithm hc --experiment
```
Code (runner.py): Simulated Annealing
```
python runner.py problem knapsack algorithm sa --experiment
```
Code (runner.py): Genetic Algorithm
```
python runner.py problem knapsack algorithm ga --experiment
```
In this report, we compare the performance of three algorithms (Genetic Algorithm, Hill Climbing, and Simulated Annealing) for solving the Knapsack problem. The reported values are averages over 5 test runs, to make them as representative as possible. The algorithms were evaluated on datasets with 10, 15, and 20 items.
Benchmark of Speed:
The execution time of each algorithm was measured using the timeit module in Python. The results are presented in the table below:
Algorithm | 10 items | 15 items | 20 items |
---|---|---|---|
Hill Climbing | 26.92ms | 35.25ms | 40.40ms |
Simulated Annealing | 5699.3ms | 5637.65ms | 5617.57ms |
Genetic Algorithm | 11008.01ms | 11210.96ms | 18182.42ms |
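For reference, per-run timings like those above can be gathered with the timeit module (solve_knapsack below is a placeholder workload standing in for the project's actual solver call):

```python
# Average wall-clock time over 5 runs, reported in milliseconds.
import timeit

def solve_knapsack():
    # placeholder workload; the real benchmark would call the solver here
    return sum(i * i for i in range(10_000))

elapsed = timeit.timeit(solve_knapsack, number=5) / 5
print(f"{elapsed * 1000:.2f}ms")
```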
Performance Comparison:
The performance of the algorithms was evaluated based on the objective of maximizing the total value while respecting the weight constraint of the knapsack. The values in the table below represent the total value. The results are presented in the table below:
Algorithm | $ Total Value (10 items) | $ Total Value (15 items) | $ Total Value (20 items) |
---|---|---|---|
Hill Climbing | 23398.0 | 25076.0 | 24666.0 |
Simulated Annealing | 24698.0 | 24768.0 | 25034.0 |
Genetic Algorithm | 15552.0 | 25780.0 | 16288.0 |
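The objective these totals reflect can be sketched as a simple fitness function (assuming infeasible solutions score zero; the project code may use a soft penalty instead):

```python
# Knapsack objective: total value of chosen items, zero if over capacity.

def fitness(solution, values, weights, capacity):
    """solution: 0/1 list choosing items."""
    weight = sum(w for w, s in zip(weights, solution) if s)
    if weight > capacity:
        return 0
    return sum(v for v, s in zip(values, solution) if s)

values = [60, 100, 120]
weights = [10, 20, 30]
print(fitness([1, 1, 0], values, weights, capacity=50))  # -> 160
print(fitness([1, 1, 1], values, weights, capacity=50))  # -> 0 (over capacity)
```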
Figure: Hill Climbing finding for knapsack
Figure: Simulated Annealing Finding for knapsack
Figure: Genetic Algorithm finding for knapsack
Observations:
The runtime experiment results indicate that the "Hill Climbing" algorithm consistently exhibits the fastest execution time across all problem sizes, outperforming both "Simulated Annealing" and the "Genetic Algorithm."
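Hill Climbing's speed comes from its very small inner loop; a minimal sketch for knapsack (assuming a single bit-flip neighbourhood and greedy acceptance, which may differ from the course implementation):

```python
# Hill climbing for knapsack with a random single bit-flip neighbourhood.
import random

def hill_climb(values, weights, capacity, iters=1000, seed=0):
    rng = random.Random(seed)
    n = len(values)

    def fitness(sol):
        w = sum(wt for wt, s in zip(weights, sol) if s)
        return sum(v for v, s in zip(values, sol) if s) if w <= capacity else 0

    current = [0] * n                      # start from the empty knapsack
    best = fitness(current)
    for _ in range(iters):
        i = rng.randrange(n)               # flip one random item in or out
        neighbour = current[:]
        neighbour[i] ^= 1
        f = fitness(neighbour)
        if f >= best:                      # accept equal-or-better neighbours
            current, best = neighbour, f
    return best

print(hill_climb([60, 100, 120], [10, 20, 30], capacity=50))
```

Depending on the random flip order, this run ends at either a local optimum (160) or the global optimum (220), which illustrates why hill climbing is fast but can get stuck.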
In terms of objective performance, there is variation among the algorithms depending on the dataset size. For the 10 items dataset, "Simulated Annealing" achieves a somewhat higher total value than "Hill Climbing", while the "Genetic Algorithm" lags well behind with a much lower total value.
Interestingly, for the 15 items dataset, the "Genetic Algorithm" outperforms the other algorithms significantly, obtaining the highest total value. This suggests that the genetic algorithm's ability to explore a larger search space and leverage genetic operators is beneficial for solving this particular problem size.
In contrast, for the 20 items dataset, "Simulated Annealing" emerges as the top-performing algorithm in terms of objective performance, closely followed by "Hill Climbing." The "Genetic Algorithm" lags behind with a lower total value, possibly indicating that its performance declines when dealing with larger problem sizes.
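Simulated Annealing's ability to escape local optima, which may explain its strong showing here, comes from its acceptance rule; a generic sketch using the default temperature and cooling rate listed earlier (the project's implementation details may differ):

```python
# SA acceptance rule: always take improvements, take worse moves with
# probability e^(delta/T), which shrinks as the temperature cools.
import math
import random

def accept(delta, temperature, rng):
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature)

rng = random.Random(42)
temperature = 100.0                 # --temperature default
decisions = []
for _ in range(5):
    decisions.append(accept(-50, temperature, rng))  # a move losing 50 fitness
    temperature *= 0.95             # --cooling-rate default: cool each step
print(decisions)                    # worse moves become less likely as T drops
```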
It is worth noting that the differences in execution time between the algorithms are substantial, especially for the "Simulated Annealing" and "Genetic Algorithm." These algorithms exhibit significantly longer runtimes compared to "Hill Climbing."
The choice of the most suitable algorithm depends on the specific requirements and trade-offs. If runtime is a critical factor and the difference in objective performance is acceptable, "Hill Climbing" can be a suitable choice due to its consistently fast execution time.
However, if the primary goal is to maximize the total value within the knapsack's weight constraint, the choice of algorithm depends on the dataset size. "Simulated Annealing" performs best for the 10- and 20-item datasets, while the "Genetic Algorithm" shows superior performance for the 15-item dataset.
It is worth considering that these observations are based on averaged results from five test runs, which increases the reliability of the reported values and strengthens the representativeness of the findings.
The observed trade-offs between runtime and objective performance highlight the need for careful algorithm selection based on the specific problem size and priorities.
It would be valuable to further investigate the performance of these algorithms on larger problem sizes and compare their scalability to gain deeper insights into their behavior and efficiency in handling increasingly complex knapsack instances.
To try out these findings, run the following commands:
Runtime and Fitness
Code (runner.py): To Run All At Once
```
python runner.py problem tsp --experiment --all --verbose
```
Code (runner.py): Hill Climbing Algorithm
```
python runner.py problem tsp algorithm hc --experiment --verbose
```
Code (runner.py): Simulated Annealing
```
python runner.py problem tsp algorithm sa --experiment --verbose
```
Code (runner.py): Genetic Algorithm
```
python runner.py problem tsp algorithm ga --experiment --verbose
```
In this report, we compare the performance of three algorithms (Genetic Algorithm, Hill Climbing, and Simulated Annealing) for solving the TSP on the Romania city map. The algorithms were evaluated on datasets with 8, 16, and 20 cities.
Benchmark of Speed:
Algorithm | 8 cities | 16 cities | 20 cities |
---|---|---|---|
Hill Climbing | 2.51ms | 4.50ms | 10.88ms |
Simulated Annealing | 1093.0ms | 1778.11ms | 2699.80ms |
Genetic Algorithm | 3330.67ms | 4458.35ms | 5656.52ms |
Performance Comparison:
Algorithm | Total Distance (8 cities) | Total Distance (16 cities) | Total Distance (20 cities) |
---|---|---|---|
Hill Climbing | 2311.40 | 2602.80 | 3457.60 |
Simulated Annealing | 2172.60 | 2757.80 | 3894.40 |
Genetic Algorithm | 4021.80 | 5182.60 | 7994.80 |
Figure: Hill Climbing finding
Figure: Simulated Annealing Finding
Figure: Genetic Algorithm finding
Observations:
This TSP problem was not a typical TSP instance, because there was not an edge between every pair of cities. So we had to come up with an approach in which the fitness function punishes paths that revisit a city. To do this, we used a Uniform Cost Search algorithm to calculate the fitness (distance) of a path: the distance of a path is the sum of the distances between each pair of adjacent cities in the path. This way, revisiting a city or taking a longer path is punished. This approach has given us decent results.
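The penalised fitness described above can be sketched on a toy graph (the graph below stands in for the Romania map; missing edges are bridged by Uniform Cost Search, so detours and revisits add distance):

```python
# Path fitness: sum of cheapest pairwise distances, found by uniform cost search.
import heapq

GRAPH = {  # toy undirected weighted graph standing in for the Romania map
    "A": {"B": 5, "C": 9},
    "B": {"A": 5, "C": 3},
    "C": {"A": 9, "B": 3, "D": 4},
    "D": {"C": 4},
}

def ucs(start, goal):
    """Uniform cost search: cheapest distance between two cities."""
    frontier = [(0, start)]
    seen = set()
    while frontier:
        cost, city = heapq.heappop(frontier)
        if city == goal:
            return cost
        if city in seen:
            continue
        seen.add(city)
        for nxt, w in GRAPH[city].items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt))
    return float("inf")

def path_fitness(path):
    """Total distance of a tour; missing edges force costly detours."""
    return sum(ucs(a, b) for a, b in zip(path, path[1:]))

print(path_fitness(["A", "B", "C", "D"]))   # -> 12 (5 + 3 + 4)
print(path_fitness(["A", "D", "B"]))        # A-D has no edge, so UCS detours
```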
The "Hill Climbing" algorithm consistently demonstrates the fastest execution time across all problem sizes, outperforming both "Simulated Annealing" and the "Genetic Algorithm."
In terms of objective performance, measured by the total distance of the path, "Hill Climbing" achieves the lowest total distance on the two larger datasets, with "Simulated Annealing" slightly ahead on the smallest one.
The "Genetic Algorithm" consistently exhibits the highest total distance, indicating suboptimal solutions for this problem, and its runtime is also the highest. This could be because we set the number of generations to 2000; increasing the number of generations might yield better results, but at the cost of an even higher runtime.
We also believe that if there were no connectivity constraints like those in the Romania city problem, the genetic algorithm would have performed better.
Further analysis and experimentation may be necessary to assess the scalability and effectiveness of these algorithms on larger problem sizes.
Note: We don't recommend using --verbose together with --experiment, because the verbose printing adds to the runtime and leads to timings that are not representative of the algorithm's efficiency. You may still add it to get a better sense of what the algorithms are doing under the hood.