Title: Feature Extraction, Edge Detection and Image Classification (Computer Vision)
Year level: 3
The ability to see, and interpret what we are seeing, is a process that humans can do easily without really thinking about it. This process is much more complex for a computer to do. Students will gain an understanding of how computers are able to use shape recognition to identify shapes and objects. Following this activity, students will learn how to label images into categories and how to train an AI algorithm.
Lesson 1: Feature Extraction and Edge Detection
Part 1: Feature Extraction – 40 mins
- Read a book to the class that emphasises features of animals
- What Do You Do with a Tail Like This? – Steve Jenkins
- Heads and Tails – John Canty
- After reading the book, the teacher will supply students (in groups of 2 or 3) with pictures of the animals from the books. They will work together to find as many ways as possible to group the animals according to their features (number of legs, wings, gills, etc.).
- Students will then choose a grouping to present to the rest of the class
- Encourage students to justify their reasons for grouping them this way and discuss patterns, similarities and differences.
- Extension activity: play "Guess Who?" using animal cards instead of people, and encourage feature-based questions.
Part 2: Edge Detection – 30 mins
- Explain to students that edge detection involves analysing an image to identify the edges that define the shape of an object.
- As a whole class, show the students a picture of a giraffe. Draw shapes over its body to explain how a computer uses these groupings of shapes to determine that this is a giraffe (rectangle for neck, circle for head, rectangles for legs, oval for body, etc.). Ask students to copy you as you draw shapes over the giraffe.
- Now ask students to do a similar thing with a picture of a kangaroo. Once students have finished, ask them to come back to the mat and compare their images with each other.
- To reinforce this concept with the students, students will use the application "Quick, Draw!" https://quickdraw.withgoogle.com/
- Students will go to the provided website and draw what the computer is asking them to draw. Based on their drawings, the computer will have a guess at what the student is drawing.
- Once they have finished the game, ask students to choose a picture they enjoyed drawing, and click on it to see how the computer arrived at its decision. "What do you notice about the drawings?" "What are the similarities between yours and other people's drawings?"
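For teacher background, the shape-tracing idea above can be made concrete: a computer finds edges by looking for places where brightness changes sharply between neighbouring pixels. The following is a minimal Python sketch of that idea, with an invented toy image and threshold; it is illustrative only, and real systems (including Quick, Draw!) use far more sophisticated methods.

```python
# Toy edge detector: a pixel counts as an "edge" when its brightness
# differs sharply from the pixel to its right or the pixel below it.
# (Illustrative sketch only; real systems use filters such as Sobel or Canny.)

def find_edges(image, threshold=100):
    """image: 2-D list of brightness values in the range 0-255."""
    rows, cols = len(image), len(image[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Compare with the neighbour to the right and the one below
            # (falling back to the pixel itself at the image border).
            right = image[r][c + 1] if c + 1 < cols else image[r][c]
            below = image[r + 1][c] if r + 1 < rows else image[r][c]
            change = max(abs(image[r][c] - right), abs(image[r][c] - below))
            if change > threshold:
                edges[r][c] = 1
    return edges

# A tiny 4x4 "picture": a bright square on a dark background.
picture = [
    [0,   0,   0, 0],
    [0, 255, 255, 0],
    [0, 255, 255, 0],
    [0,   0,   0, 0],
]
for row in find_edges(picture):
    print(row)  # the 1s trace the outline of the bright square
```

Running this marks exactly the pixels along the boundary between the bright square and the dark background, which is the same outline students trace by hand over the giraffe.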
Lesson 2: Image Classification – 60-70 mins
- Let students explore the use of vision apps to detect objects in the environment. Students can start by watching the following video https://www.youtube.com/watch?v=FZBGjxQeP-A&feature=youtu.be
- Explain that we are going to try recognising an object only by its features, using a blindfold. Provide objects for the students to try to recognise, and encourage them to think about the shapes and features of the object they are holding.
- As a class, discuss which objects they guessed correctly and which objects they had trouble identifying. What are the similarities and differences with the incorrect guesses?
- Training AI to see using Cognimates.
- Students will now test an AI system by creating a game using Cognimates. Cognimates is a visual programming platform and can be found at http://cognimates.me/home/
- Obtain an API key, then go to the Cognimates website. Select the option to train a model, then choose vision training.
- Fill out the model information, such as names for the two categories
- Upload at least 10 images for each category (20 in total). In this instance we are looking at the difference between cats and dogs.
- Select train model
- Upload a new image of either a cat or a dog and select predict. The model will predict whether the image shows a cat or a dog. If it guesses incorrectly, train it further by uploading more sample images of cats and dogs.
- Once students have finished, have them try the system again with different living things, such as shark vs. dolphin or giraffe vs. elephant.
- How is the AI system sorting and classifying data? (based on the object's features and attributes)
- Why are we using an algorithm? (to provide the computer with instructions that will help it answer the question/solve the problem)
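For teachers who want a concrete picture of classification by features, the idea can be sketched in a few lines of Python. This is not how Cognimates trains its vision models; it is a hypothetical nearest-neighbour example using hand-chosen animal features (legs, wings, gills), mirroring the grouping activity from Lesson 1. All names and feature values here are invented for the example.

```python
# Toy classifier: guess an animal's category from simple features,
# echoing the "group animals by features" activity. Nearest-neighbour
# sketch only -- not how Cognimates actually trains image models.

# Each training example: (features, label), features = (legs, wings, gills)
training = [
    ((4, 0, 0), "dog"),
    ((4, 0, 0), "cat"),
    ((2, 1, 0), "bird"),
    ((0, 0, 1), "fish"),
]

def distance(a, b):
    # Squared distance between two feature tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(features):
    # Return the label of the closest training example.
    best = min(training, key=lambda example: distance(example[0], features))
    return best[1]

print(predict((2, 1, 0)))  # matches the bird example exactly
```

Note that the dog and cat examples share identical features, so the toy model cannot tell them apart, which is exactly why real systems need many images and richer features, and why uploading more samples improves Cognimates' guesses.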
Why is this relevant?
As a result of completing this task, students will have gained a deeper understanding of how computers are able to mimic human sight based on an object's features, as well as how algorithms are able to classify images into categories. This task connects to digital technology concepts, as students are sorting and classifying data during the activities and creating algorithms to solve real-world problems. One such problem is faced by people who are visually impaired: technology has been developed that enables blind people to distinguish one object from another in different environments, such as in the house or on the street when crossing the road. The same type of technology is used in self-driving cars, which rely on vision to identify objects such as people, cars, street signs and road boundaries in order to make fast, safe and appropriate decisions.
For the summative assessment task, students would use the information and skills they have learnt to train AI to recognise the following household items: fork, spoon and cup.
Links with the Digital Technologies curriculum area
| Years | Strand | Content description |
| --- | --- | --- |
| Years 3-4 | Knowledge and Understanding | Recognise different types of data and explore how the same data can be represented in different ways (ACTDIK008 – Scootle) |
Links with other curriculum areas
| Year | Learning area | Content description |
| --- | --- | --- |
| Year 3 | Science – Biological Sciences | Living things can be grouped on the basis of observable features and can be distinguished from non-living things (ACSSU044 – Scootle) |