Hackathons

ZoomedOut

Set up this virtual camera and always look your best for your peers. This intelligent virtual camera reacts in real time to what's happening in your meeting, so you cry or laugh on cue without having to actively pay attention. Use the time you save to make 2021 as productive as possible. This project gave me the opportunity to learn how to build a web application with Flask and to control external software programs over a websocket.
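
A minimal sketch of the Flask side, just to show the shape of the app (the routes and responses here are illustrative, not the project's actual code):

```python
# Minimal Flask sketch -- routes and responses are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Landing page / control panel for the virtual camera.
    return "ZoomedOut virtual camera control panel"

@app.route("/status")
def status():
    # Report whether the reaction engine is currently running.
    return jsonify({"running": True})

if __name__ == "__main__":
    app.run(debug=True)
```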

Python   HTML   CSS

Technologies

OpenCV and Google DialogFlow

OpenCV captures the video feed from your call and analyzes the facial emotions of everyone on the call, triggering prerecorded reactions that play seamlessly in the virtual camera. DialogFlow is also used to notify you if you are mentioned in the call.
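
How the capture-and-react loop could look: OpenCV pulls frames and finds faces, and a classifier decides which prerecorded reaction to trigger. The classifier below is a placeholder, not the project's actual model:

```python
# Sketch of the capture-and-react loop. classify_emotion is a
# placeholder; the project's real emotion model may differ.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_img):
    return "neutral"  # plug in an emotion-recognition model here

cap = cv2.VideoCapture(0)  # the meeting's video feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        emotion = classify_emotion(gray[y:y + h, x:x + w])
        if emotion in ("happy", "sad"):
            pass  # tell OBS to play the matching prerecorded reaction
cap.release()
```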

Open Broadcaster Software

Controlled via a websocket, this software can play a number of prerecorded clips and produce a virtual camera to use in video calls.
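
One way the websocket control could look, sketched with the obs-websocket-py client (v4 protocol on its default port; the scene name is illustrative):

```python
# Switch OBS to a scene containing a prerecorded reaction clip,
# which then plays into the virtual camera.
from obswebsocket import obsws, requests  # pip install obs-websocket-py

ws = obsws("localhost", 4444, "password")
ws.connect()
ws.call(requests.SetCurrentScene("LaughReaction"))  # illustrative scene name
ws.disconnect()
```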

Github

The HelpingHand

The HelpingHand can be used in a variety of applications: thanks to its simple design, the arm can be easily mounted on a wall or a rover. A high-speed video camera lets users see the arm and its environment, so they can remotely control the robot hand with their own hand for a more intuitive experience.

C   Python   HTML   CSS

OpenCV

For the first prototype, the program tracks a green ball to control the claw. We also used TensorFlow for hand-gesture tracking, for later integration.
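
A sketch of that first-prototype tracking: HSV thresholding for green, then the largest contour's enclosing circle (the HSV range is approximate and would need tuning for real lighting):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough HSV range for a green ball; tune for your lighting.
    mask = cv2.inRange(hsv, np.array([40, 70, 70]), np.array([80, 255, 255]))
    # OpenCV 4 return signature (two values).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        # Ball position/size would drive the claw command here.
cap.release()
```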

Arduino

The servo motors were driven by the Arduino over a serial connection with the computer, which read and sent the hand-gesture data.
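
The computer side of that link could be as simple as this pyserial sketch (port, baud rate, and message format are illustrative):

```python
import serial  # pip install pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def send_claw_angle(angle):
    # One newline-terminated integer per command; the Arduino
    # parses it and moves the servo accordingly.
    arduino.write(f"{angle}\n".encode())

send_claw_angle(90)  # e.g. open the claw halfway
```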

Top 25 Project

Winner at the largest hackathon in northwestern Canada


Website Github

OrigamiGo

Origami is cool. Machine vision is cool. Together, even cooler! This step-by-step visual guide lets you easily follow and verify your folds. Using machine vision from Microsoft Azure and OpenCV, you'll never struggle to make origami again! After each fold, hold the model up to the camera; if it's done correctly, the AI tells you the next step.

Python

Microsoft Azure Machine Vision

To recognize each step in the origami process, we used machine vision to check whether a step was done correctly before moving on to the next one.
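
One way this check could look against a trained Azure Custom Vision classifier's prediction REST endpoint (the endpoint, project ID, iteration name, key, and threshold are placeholders):

```python
import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
URL = (f"{ENDPOINT}/customvision/v3.0/Prediction/<project-id>"
       "/classify/iterations/<iteration-name>/image")

def step_is_correct(image_path, expected_label, threshold=0.8):
    with open(image_path, "rb") as f:
        resp = requests.post(
            URL,
            headers={"Prediction-Key": "<key>",
                     "Content-Type": "application/octet-stream"},
            data=f.read())
    # Accept the fold if the classifier is confident about the
    # expected step label.
    return any(p["tagName"] == expected_label and p["probability"] >= threshold
               for p in resp.json()["predictions"])
```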

OpenCV

OpenCV was used to give the user visual feedback via the webcam: marking folds, showing the next step, and indicating whether a step was done correctly.
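
A sketch of that overlay (coordinates and messages are illustrative):

```python
import cv2

def draw_feedback(frame, fold_start, fold_end, step_ok):
    # Mark the next fold line on the live frame.
    cv2.line(frame, fold_start, fold_end, (255, 0, 0), 2)
    # Banner indicating whether the last step was done correctly.
    msg = "Step correct!" if step_ok else "Try that fold again"
    color = (0, 255, 0) if step_ok else (0, 0, 255)
    cv2.putText(frame, msg, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
    return frame
```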

Text to Speech

Using text-to-speech libraries, we also gave the user audio instructions and feedback.
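
The project only says "text-to-speech libraries"; pyttsx3 is one common offline option, and the call is nearly a one-liner:

```python
import pyttsx3  # assumption: any TTS library with a similar API works

engine = pyttsx3.init()
engine.say("Fold the top corner down to meet the bottom edge.")
engine.runAndWait()
```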

Github

cmd-f My Voice

cmd-f My Voice is a tool to help you easily find keywords in recordings. Students can upload recorded lectures, study sessions, or any audio files of their choice. When you enter a keyword, the timestamps of each occurrence of the word in the audio are listed so you can quickly jump to that moment in the file.

JavaScript  HTML   CSS

Google Cloud Speech-to-Text

We used the Speech-to-Text API to turn the audio file into searchable text, and JavaScript to jump to the desired timestamp.
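
The project itself ran this through a Node server, but the idea is easiest to show with the Python client: enable word time offsets and read a timestamp off every recognized word (bucket and file names are placeholders):

```python
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code="en-US",
    enable_word_time_offsets=True,  # one timestamp per word
)
audio = speech.RecognitionAudio(uri="gs://<bucket>/<recording>.wav")

# Lectures are long, so use the asynchronous API.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)

for result in response.results:
    for word in result.alternatives[0].words:
        # start_time is a timedelta in recent client versions.
        print(word.word, word.start_time.total_seconds())
```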

Web Application

Our web application used a Node server to communicate with Google Cloud Storage and the Speech-to-Text API.


Github