Learning is often framed as an individual activity. However, our teammate Nathalia's previous design research indicated that learning with friends is usually more effective, and more fun!
So we created a game for collaborative learning. Ggul-Jaem (Korean slang for "fun") is a tangible collaborative learning tool that helps non-native Korean speakers learn the language.
In Ggul-Jaem, participants are assigned to a team and each given tiles with Korean symbols. Players are asked to assemble a word, which requires them to learn the meaning of their tiles as well as how to work together. In this collaborative gaming environment, we hope players can expand their learning potential and actively learn with and from their peers.
TIME: 2019 Spring | 5 Weeks
TEAM: Jamie Catacutan, Nathalia Kasman, Heather Lee
ROLE: Team Leader, Coder
TOOL: Sandblaster, Laser Cutter, Vinyl Cutter, Illustrator
SOFTWARE: reacTIVision, TUIO Simulator, Processing
People learn best when human interaction is involved. However, with the rise of language-learning apps on the market, the language-learning experience risks becoming isolated.
How might we bring back the social aspect of language learning, allowing people to learn from each other in a physical and collaborative environment?
1. Incorporate human interaction in the learning experience.
2. Cultivate a joyful and playful learning environment.
3. Create an environment in which peers can learn from and with each other.
Tangible Interaction refers to the concept of interacting with the digital world through physical objects, gestures, and behaviors in familiar or intuitive ways.
This game allows participants to go beyond the screen and physically interact with objects that reveal information about the subject matter.
To lighten each player's information load, the tiles are divided among the participants. This both encourages group learning and strengthens language-learning skills, as players need to communicate to help their peers.
This game engages two main senses when giving feedback on a placed tile: hearing and sight. Based on this multi-sensory feedback, each participant learns whether they chose the correct tiles.
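For illustration, here is a minimal Processing sketch of that two-channel feedback, assuming the Processing Sound library; the audio file names and the giveFeedback() hook are hypothetical stand-ins for the game's real assets and logic.

```java
import processing.sound.*;

SoundFile correctSound, wrongSound;
color flash = color(255);  // current feedback color shown on the table

void setup() {
  size(640, 480);
  // hypothetical file names standing in for the game's recorded audio
  correctSound = new SoundFile(this, "correct.wav");
  wrongSound   = new SoundFile(this, "wrong.wav");
}

// called by the game logic whenever a tile is placed
void giveFeedback(boolean isCorrect) {
  if (isCorrect) {
    correctSound.play();         // auditory channel
    flash = color(0, 180, 0);    // visual channel: green
  } else {
    wrongSound.play();
    flash = color(180, 0, 0);    // visual channel: red
  }
}

void draw() {
  background(flash);  // the projected table visuals would be drawn on top
}

void mousePressed() {
  giveFeedback(mouseX < width / 2);  // demo trigger: left half = correct
}
```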
There were two roles in the teamwork distribution: the physical builders and the program coders. We worked together to brainstorm, paper-prototype, and create the game's basic setup. After that, we assigned tasks according to our roles. Jamie and I were the coders, and we dove right in once we all agreed on the basic interactions of the tangible interface.
Physical prototypers: Nathalia, Heather
Digital prototypers: Jieying, Jamie
Korean words are made up of vowels and consonants (jamo). So we put the symbols on cards and tested the concept with Heather, our team's native Korean speaker. We started with pen and paper to test the learning experience.
In this video, we test whether the collaborative experience would help non-native Korean speakers learn better. As you can see, we ended up not only learning from each other, but having fun in the process. This experience wouldn't have been possible if we were learning by ourselves through a screen.
The setup consisted of:
-A round table with a translucent center.
-A projector that casts the computer's image toward the table.
-Plastic tiles labeled with Korean vowels/consonants on top and fiducial markers underneath, placed on the table. (See below; a sketch of how marker IDs map to symbols follows this list.)
-A mirror underneath the table that redirects the projector's image onto the translucent center, so the computer's visuals appear on the tabletop.
-An IR camera and IR light that read the fiducial markers.
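To give a sense of how the tiles and markers relate in code, here is a small Processing sketch of the lookup this implies: each fiducial's symbol ID maps to the jamo printed on top of its tile. The IDs and symbol assignments below are illustrative, not our actual set.

```java
import java.util.HashMap;

// hypothetical marker-to-jamo assignments
HashMap<Integer, String> tileSymbols = new HashMap<Integer, String>();

void setup() {
  tileSymbols.put(0, "ㄱ");  // consonant giyeok
  tileSymbols.put(1, "ㅏ");  // vowel a
  tileSymbols.put(2, "ㄴ");  // consonant nieun
  tileSymbols.put(3, "ㅁ");  // consonant mieum
  println(jamoForMarker(1)); // prints ㅏ
}

// look up the jamo for a detected fiducial marker
String jamoForMarker(int symbolID) {
  return tileSymbols.getOrDefault(symbolID, "?");
}
```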
Because we wanted this experience to be as realistic as possible, we decided to code the entire game ourselves. None of us were engineers or programmers, but Jamie and I were excited to take on the challenge.
We sketched out what the task flow for this game would look like, to help us break the code into modular components.
From the image alone, the task flow may seem simple to understand, but for visual learners it is hard to hold onto a mental framework reduced to just 'input and output.' For time's sake, we focused mostly on the gaming phase; the pre-gaming and ending phases are represented by a pre-recorded video demo.
This chart shows what input and output were needed to get the game going.
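As a rough sketch of the gaming-phase logic the chart implies, the input boils down to the jamo tiles currently on the table, read in order, and the output to a correct/incorrect signal. The target syllable and names here are hypothetical.

```java
// hypothetical target: 간 ("gan"), composed of ㄱ + ㅏ + ㄴ
String[] targetJamo = { "ㄱ", "ㅏ", "ㄴ" };

// input: the jamo on the table, left to right; output: whether they match
boolean checkAssembly(String[] placedJamo) {
  if (placedJamo.length != targetJamo.length) return false;
  for (int i = 0; i < targetJamo.length; i++) {
    if (!placedJamo[i].equals(targetJamo[i])) return false;
  }
  return true;
}

void setup() {
  println(checkAssembly(new String[] { "ㄱ", "ㅏ", "ㄴ" }));  // true
  println(checkAssembly(new String[] { "ㄴ", "ㅏ", "ㄱ" }));  // false
}
```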
reacTIVision is an open source, cross-platform computer vision framework for the fast and robust tracking of fiducial markers attached to physical objects, as well as for multi-touch finger tracking. We used reacTIVision as the main software for the game.
Below is a sketch of how all the pieces connect and interact with one another.
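On the software side, reacTIVision streams its tracking data as TUIO messages over UDP (port 3333 by default), and the companion TUIO client library for Processing turns those messages into add/update/remove callbacks. Here is a minimal client along those lines; the tile bookkeeping is an illustrative addition rather than our production code.

```java
import TUIO.*;
import java.util.HashMap;

TuioProcessing tuioClient;
// tiles currently on the table, keyed by TUIO session ID (our own bookkeeping)
HashMap<Long, Integer> tilesOnTable = new HashMap<Long, Integer>();

void setup() {
  size(800, 600);
  tuioClient = new TuioProcessing(this);  // listens for TUIO on UDP 3333
}

void draw() {
  background(0);
  fill(255);
  text(tilesOnTable.size() + " tiles on the table", 20, 30);
}

// called when a tile (fiducial marker) appears on the table
void addTuioObject(TuioObject tobj) {
  tilesOnTable.put(tobj.getSessionID(), tobj.getSymbolID());
}

// called when a tracked tile moves or rotates
void updateTuioObject(TuioObject tobj) { }

// called when a tile is lifted off the table
void removeTuioObject(TuioObject tobj) {
  tilesOnTable.remove(tobj.getSessionID());
}

// called once per TUIO frame
void refresh(TuioTime frameTime) { }
```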
Neither Jamie nor I were very familiar with programming, so it was a fun (and funny) experience to learn with each other. We even sketched out how a line of code would work in order to communicate with each other!
The project was heavily code-based, and the physical and digital prototyping teams worked separately after the paper-prototyping stage. As the team leader, I had to keep both teams aligned with our objectives. I found the most effective way to communicate was through sketches.
This project taught me how team collaboration works in a real-life setting. With my responsibilities as both a coder and a leader, the main challenges I faced were distributing tasks fairly, and trusting my teammates to be responsible for their work. Luckily, my teammates were accountable and reliable. There were a lot of steps we needed to take, but none of them were actionable unless we could ensure the code would work for the interaction we wanted. Hence, Jamie and I had to code like pros over five weeks and keep the physical team updated. It was challenging, but also rewarding.
This project gave me another perspective on how collaborative learning can enhance learning outcomes and bring fun into the process. It inspired me to further explore Tangible Interaction, and ultimately laid the foundation of my future career path.