In this module, we looked at techniques for telling effective stories.
I do not think I am good at telling stories, though I haven't had to shape stories in this kind of educational format, where I put more thought into structure and presentation. Naturally, I fill my stories with a very personal touch and get my audience involved. I can certainly create suspense, but it helps if the story I am telling is inherently suspenseful, which is probably why I am telling it in the first place. It would be harder to make a boring story, or a story that hides a lesson, suspenseful; that takes a lot more construction and manoeuvring on the storyteller's part.
In my life, I have met a lot of people who are horrible at telling stories, and I remember that much better than all the good stories. When I think of stories, I think much more about the books and movies that stick with me. There is a quote I love that goes something like: I can no more remember the books I have read than the meals I have eaten; even so, they have made me. That speaks to me about the power of stories, especially because I have a horrible memory. The same idea applies to teaching and absorbing lessons from stories: if the audience can tell you are trying to bake some meaningful lesson into a story, it quickly becomes boring or condescending. But when the story is really good and uses those storytelling techniques well, you can't even tell you are learning something, which makes it far more likely the audience will take something away from it. It is like hiding your dog's medicine in a treat.
I used to love branched narrative stories when I was a kid, or, as I knew them then, choose-your-own-adventure books. I think it was American Girl that made these books for little girls, where you would have to make hard decisions that led you down different paths. I remember always reading one with some sort of dog-adoption storyline. Those are always great. I remember books like these being very popular with kids. The format gives the reader a more active way to be involved and to make decisions about the story.
The Bill Gates malaria TED Talk actually made me gasp when I read that he opened a jar of mosquitoes. That surely resonated with people; very much show, don't tell.
Using Twine was overwhelming at first, but once I picked up the bare basics from different resources I was able to begin. I knew I wanted images, because playing this kind of game with text alone is quite boring. I just wanted to write something fun and silly to experiment with the program, and it is really fun to use. One of the videos I watched said the program makes you think in a non-linear way, which is true, and cool: it makes you think about your story in a different way and come up with new branches and diversions. The branch labels below are from my game, and after them I've sketched the same logic in code.
If you say no:
If you say yes:
To silence:
To music:
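To show what that non-linear thinking looks like, here is the same branch structure sketched in Python rather than in Twine's own [[passage link]] markup. This is only a sketch: the passage names and text are stand-ins, not my actual game.

```python
# A minimal choose-your-own-adventure engine: each passage has text plus
# a dict mapping a choice word to the next passage (empty dict = ending).
passages = {
    "start": ("Do you want to play?", {"yes": "said_yes", "no": "said_no"}),
    "said_no": ("The story ends before it begins.", {}),
    "said_yes": ("You hear two doors: one silent, one playing music.",
                 {"silence": "silent_room", "music": "music_room"}),
    "silent_room": ("You step into a quiet, empty room. The end.", {}),
    "music_room": ("You walk into a party in full swing. The end.", {}),
}

def play(name="start"):
    text, choices = passages[name]
    print(text)
    while choices:
        pick = input(f"[{'/'.join(choices)}] > ").strip().lower()
        if pick in choices:
            play(choices[pick])
            return
        print("Not one of the options.")

if __name__ == "__main__":
    play()
```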
I find watching videos much better for learning things like this. The Twine cookbook was kind of outdated; not all of its information worked in the current version of the program, and I didn't want to read all that text. However, I easily found YouTube videos that guided me much faster.
So I learned how to start working with Twine with this video:
and learned how to import images with this video:
Then I created my learning video on my phone, starting with a script and a storyboard.
Here is the link to my video: https://vimeo.com/1030568031?share=copy
When filming on my phone versus recording a screencast, I forgot how much the camera moves. I made sure to use the stabilizer in iMovie to steady my video. If I were to make a cleaner video I would need a tripod, or better yet another person to hold the camera.
My comment on Natasha’s Blog:
Funny connection you made between Everything Everywhere All at Once and Twine; that movie is another good example of entertaining storytelling. Lovely that you got to live out your YouTube dreams in this blog and put your skills to use. I think I need to try this ramen now. Beautiful tangent about BTS's endorsement of the ramen. You put your all into your video, and it looks delicious. I'm wondering, do you have any experiences of really bad stories? Or any stories that aren't super sad but that you still enjoy?
Hi Natasha,
I liked your little questions after each section of your blog. It was like a fun little test to see if I had read what you wrote, which is funny.
I love the idea of using AI to make quizzes for you based on your course materials. That’s such a smart way of using the tool. It reminds me of the app Quizlet that everyone used back in high school. Need to start doing that.
Nice noticing that simply by looking at your transcript you could tell which classes you were actively or passively learning in. I think that's a correct observation and would hold true for anyone.
Do you see any way you could increase your active learning in classes like computer science where you feel you don’t have an innate interest or the material is simply dry?
Hi Markus,
I like how you phrased your revelation on inclusive design: realizing that you should create something with the anticipation of needing to accommodate different needs, instead of taking something that already exists and trying to make it accessible after the fact. You are essentially using the backward design method we learned about in the last module to think about creating things for accessibility, so that's great!
I’m glad text-to-speech tools worked out for you and that you found them helpful. I cannot absorb information from just listening, and, as you mentioned, the robotic voice doesn’t help much either.
And yes, the WAVE accessibility tool was disappointing. We had a very similar experience.
To answer your question, there are many tools online that can check how your work might look to someone who is colourblind. Some of them can be helpful in different ways, such as checking the contrast of your text or how different colours will read. I use these mostly for design purposes.
When you say you will try to use generic and plain text, what do you mean exactly? If you mean limiting your writing to very simple sentence structures and less flowery language, I think you would be doing a disservice to yourself and others; making something accessible doesn't mean dumbing it down. But if you meant fonts, then yes, I agree haha.
In this module, we looked at design principles and accessibility!
WAVE check:
First, I ran my first blog post through WAVE, and most of the corrections it gave me were quite misguided. It flagged things I couldn't change, like elements that are part of the website theme and the links to my other blog posts at the bottom. It told me there were structural issues and said I had redundant titles on the date and time of people's comments.
So the program can't really do much for me in this context; I have a better eye than the machine here. The only thing I could take away and consider is labelling the image at the beginning of my post, but it is just decorative and doesn't really need a description. It's an interesting program but limited in its abilities. It reminds me of last week's AI module and how we need to keep supervising these kinds of tools.
I could see how, in a different context and on a different kind of page, this could help you realize how your work looks to someone with different abilities. It is nice to have another set of eyes on your design choices in terms of accessibility.
Text-to-speech:
The example in the tweet of decorative fonts being read aloud was interesting; I didn't know using weird fonts like that trips up some voice readers. That should really be fixed.
I used Read-Aloud to read out some of the text from my blog. I've used systems like this before to read my work; it is a good way to spot grammatical errors in your writing. Missing punctuation and run-on sentences are easier to catch when you hear them spoken, even if the voice is a bit stilted. I also find that the robot sometimes can't pick up on the rhythm I intend when I write in my own voice. It is like Grammarly in that way: I get corrections that change my text in a way the tool decided is better but that isn't actually easier to understand, which can be frustrating. I can write a perfectly fine sentence, and a robot can still read it in the wrong rhythm.
I sometimes find text-to-speech hard to follow along with, both speed-wise and tone-wise. So I don't use it often for actually understanding a piece of text; I'd rather read it myself and set my own pace. But I do use it to check for errors in my writing, or simply to hear what my writing sounds like, and for that it is helpful. I can also see how this helps students who can't read for long periods and find it easier to listen. These programs are already good for that, and it will be even nicer in the future when the voices become less robotic and more fluid.
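As a side note, this kind of proofreading-by-ear can even be done offline. Here is a small sketch, assuming the pyttsx3 Python package and a hypothetical draft.txt file; I actually used the Read-Aloud browser extension, so this is just the same idea in code.

```python
# Read a draft aloud with an offline TTS engine (pip install pyttsx3).
import pyttsx3

engine = pyttsx3.init()
rate = engine.getProperty("rate")       # default speaking rate (words/min)
engine.setProperty("rate", rate - 40)   # slow down so errors stand out

with open("draft.txt", encoding="utf-8") as f:
    engine.say(f.read())

engine.runAndWait()  # blocks until the whole text has been spoken
```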
Canva Infographic:
I decided to make my infographic about my anthropology of sound class, where we learned about sound walks and conducted one of our own. I thought I could lay out the main principles of a sound walk in this format.
I’ve used Canva before so I’m familiar with it. I followed the design principles we learned about in this module to make it. I kept my design simple with minimal colours and fonts.
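Since colour choice affects readability, and the colourblindness tools I mentioned in my comment on Markus' blog check contrast, here is roughly the WCAG 2 math those checkers run. This is only a sketch; the example colours are hypothetical, not my actual palette.

```python
# WCAG 2 contrast ratio between two sRGB colours given as hex strings.
def luminance(hex_colour: str) -> float:
    """Relative luminance of a colour like '#1a2b3c'."""
    channels = []
    for i in (0, 2, 4):
        c = int(hex_colour.lstrip("#")[i:i + 2], 16) / 255
        # Linearize each channel per the WCAG formula.
        channels.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = channels
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example colours: dark grey text on white is ~15.9:1, well above the
# 4.5:1 that WCAG AA requires for normal-sized text.
print(round(contrast_ratio("#222222", "#ffffff"), 2))
```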
Harkening back to Mayer's principles of multimedia, I knew that pairing visuals with text communicates more intuitively, so adding the images of an ear and of sound waves is a small way of connecting the ideas in the text to the visuals.
In terms of accessibility, captions are certainly the most prominent example for me; I rarely watch media without them. In terms of learning environments and universal design for learning, the engagement guideline is prevalent in a lot of my courses. Teachers often offer flexibility in how coursework can be presented: someone might prefer an essay, while someone else might do a class presentation. I've seen these sorts of options in many of my courses and find them very helpful; they let everyone choose how they can best present their learning.
My comment on Markus’ blog:
This module examined the different ways active learning is incorporated into effective lesson plans. We were also introduced to H5P, a new tool for creating interactive learning activities.
Several new pieces of teaching vocabulary were thrown at me in this module, including but not limited to backward design, Merrill's five principles of instruction, Bloom's taxonomy, and scaffolding.
Although these terms were all new to me, like other terms I've learned in this course they turn out to be quite simple, maybe even seemingly obvious, ideas about teaching and learning organized into unfamiliar structures.
Backward design was the most straightforward to me: keep the goal in mind before building the lesson plan so you know how to attain it. Makes sense. This is something I encounter all the time in my courses' syllabi, which always include things like learning outcomes and objectives the students should accomplish, followed by the lessons that will lead to those goals. I also see backward design in courses like this one, where many of our assignments act as building blocks for our final assessment goal.
Merrill's Five Principles of Instruction have to do with initiating hands-on problem-solving and learning. It is about making lessons meaningful to learners, like relating assignments to real-world events or issues.
Learning what scaffolding is brought back many memories of elementary and high school, where I feel this method was used a lot in settings like science class or art. For example, the teacher would first demonstrate a lab before assigning us to do the same thing individually. This is a learning design I feel particularly connected to and know works well for me.
Simulation was brought up a lot in this module's materials as an example of how media can enrich a learning experience. This is an interesting idea for the future, when things like VR get more advanced and we can see how they are used in a classroom setting.
My experience using H5P wasn’t half bad.
I can see myself using it again if need be. It’s a handy way to enrich a video.
Here is the lesson design template I filled out for the interactive video I made. It details the learning outcomes and the activities required to succeed with an interactive quiz video like mine:
I really wanted my video to be related to art history, so I found an exhibition tour video from The Met explaining a pair of portraits. I took a snippet of this longer video and turned it into an interactive video.
I thought I could make this video more of an active learning experience with H5P. Really, it is an active listening quiz: students have to open their ears and show their listening by answering the questions that interrupt the video.
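H5P stores these interactions in its own format behind the scenes, but conceptually an interactive quiz video is just questions pinned to timestamps. Here is a hypothetical Python sketch of that idea; the questions and times are made up, not the ones from my actual video.

```python
# Conceptual model of an interactive quiz video: questions pinned to
# timestamps where playback pauses. Not H5P's real format, just the idea.
from dataclasses import dataclass

@dataclass
class TimedQuestion:
    pause_at: float   # seconds into the video where playback pauses
    prompt: str
    answer: str

quiz = [
    TimedQuestion(45.0, "Who is shown in the pair of portraits?", "example answer"),
    TimedQuestion(90.0, "What detail does the curator point out?", "example answer"),
]

for q in sorted(quiz, key=lambda item: item.pause_at):
    print(f"At {q.pause_at:>5.1f}s the video pauses and asks: {q.prompt}")
```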
The interactive video I created using H5P:
Sources:
My comment on Natasha’s Blog:
Hi Natasha!
I totally agree with what you said about checking the validity of AI being the hardest thing. I had a similar experience where I was asking ChatGPT for information and it just completely lied to me, and I had to correct it, which is crazy!
I didn't know how wrong AI could be with computer science questions, but I guess it makes sense with what we learned this week: AI is really good at creative, human things and not at these high-tech kinds of questions, at least not always. Even with the Shark Tank game you played, it's super interesting how well AI can imitate and handle scenarios full of cultural knowledge.
Do you think, if someone asked AI to write a script for a show like Shark Tank, you would be able to tell the real script from the AI one?
Nice blog post!
This week we played around with AI.
Before this module, I knew only very general things about AI and had only played around with ChatGPT and various free image generators. Through this week's readings I learned much more about AI and how best to utilize it. I've always been a bit wary of generative AI tools, but through this process I became a little less scared of using the programs available to me.
What I learned:
LLMs, or large language models, are a subsection of generative AI tools, like ChatGPT, Bing Chat, and Gemini Google, that produce human-like text based on prompts and, in some cases, are connected to the internet.
This is what ChatGPT gives as a definition of generative AI:
“Generative AI is a type of artificial intelligence that creates new content—like text, images, music, or video—by learning from existing data. It uses models like GPT for text or GANs for images to generate realistic, creative outputs, mimicking human-like creativity. It’s used in various fields like art, content creation, and entertainment.”
I learned how AI is basically undetectable, how it is based on prediction, and how even it has limitations. For example, 10-20% of what these models produce is hallucination: it's not real, and they are not good at understanding the text they are putting out. There is also a large environmental impact from using so much power.
I learned about SAMR and TPACK, two models for evaluating the use of technology in learning.
What I did (Prompting AI):
I chose to use Gemini Google because, unlike ChatGPT, which I am more familiar with, this LLM is connected to the internet. I made sure to follow the advice given for talking to an AI, like giving the AI a role to play and telling it who it was; so I told it it was a student at university. I also broke my request down into steps, as was suggested, to help it process what I wanted and not get confused along the way.
This is what I asked of Gemini Google:
You are an education student in university writing about the pros and cons of the SAMR model of learning. First, give 3 examples of substitution in the classroom and secondly describe their pros and cons in terms of learning.
This first response was lengthy but pretty accurate. It understood what the SAMR model was and what I wanted, and it assumed the role of an education student. It delivered correct information on the types of substitution and their cons in the classroom. My one complaint was that it was too lengthy, so I tried again:
Could you shorten this information down to be more bite sized?
This was much better. The information was more digestible, but I didn't like that the pros and cons weren't specific to the examples of substitution it gave me, so I tried again:
Could you write that again but with the specific cons attached to each example of substitution.
With my third try I finally got the result I wanted. It definitely helps to pick at the AI to get it to do what you want, though doing this did make me feel a bit bossy.
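For anyone curious, the same role-plus-steps prompt could be scripted instead of typed into the web app. This is only a sketch, assuming the google-generativeai Python package and a personal API key; I only ever used the Gemini web interface.

```python
# Sending the same role-plus-steps prompt through the google-generativeai
# package (pip install google-generativeai). The API key is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = (
    "You are an education student in university writing about the pros "
    "and cons of the SAMR model of learning. First, give 3 examples of "
    "substitution in the classroom and secondly describe their pros and "
    "cons in terms of learning."
)
response = model.generate_content(prompt)
print(response.text)

# Follow-ups like "Could you shorten this information down?" would go
# through a chat session, e.g. chat = model.start_chat() and then
# chat.send_message(...), so the model keeps the earlier answer in context.
```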
Then I repeated the same process but with a slightly more specific prompt.
You are an education student in university. You will write a short TPACK analysis of the use of a Generative AI tool for learning, specifically gemini google.
Then I felt stuck in a loop trying to get different formats of the answer. I kept getting the same format and undergoing the same shortening process. I found it hard and a bit tiring to type such direct instructions to the AI. I know that is the easiest way for it to understand our requests, but it makes me feel bad.
A lot of what AI generates can be very cookie-cutter perfect, in a way that becomes bland. I found it fascinating that the term "AI generated" is associated with things that aren't very good. It is true that AI allows creativity to be accessed by more people and can make creation faster. However, there are still so many things about AI, like using artists' content from the internet without consent and its lack of critical thinking, that limit it and make it sometimes a questionable source of help. This module did show me different ways in which I could use AI to benefit my learning experiences. AI is not perfect, but it is a growing tool that cannot be ignored, and we should all learn how to use it effectively.
Link to Natasha’s blog I commented on:
AI Citations
“Define Generative Artificial Intelligence, a short and sweet version plz.” prompt. ChatGPT, OpenAI, 10 Oct. 2024, https://chatgpt.com/?temporary-chat=true.
“You are an education student in university writing about the pros and cons of the SAMR model of learning. First give 3 examples of substitution in the classroom and secondly describe their pros and cons in terms of learning.” prompt. Gemini Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/e1f6fea957ad0cb2.
“Could you shorten this information down to be more bite sized?” follow-up prompt. Gemini Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/e1f6fea957ad0cb2.
“Could you write that again but with the specific cons attached to each example of substitution.” follow-up prompt. Gemini Google, 1.5 Flash, 10 Oct. 2024, https://gemini.google.com/app/e1f6fea957ad0cb2.
Hi Markus,
I like that you adapted your screencast content to be more non-technical for those who might not be familiar with what you are talking about. The intrinsic load of what you are trying to teach is fairly high already, and I think you were able to minimize the load on top of that. Saying that, I still had no idea what you were talking about (I am not in computer science), haha; nice try though. Your screencast was very relaxing.
I like that you drew on your personal experiences to describe examples of load theory. To answer your question: I have had experiences with extraneous load with teachers who began to teach things that weren't part of our curriculum. They would slip up and start teaching ahead of whatever unit we were working on.
Now, I have a question for you.
How would you label your video? What would you title it so students could decide whether it was at their level?
Nice blog post!
First Blog Post… (screencast video down below)
As I am an art history student, not an education student, I am very new to all of this language and these ideas about learning. For example, I had never heard of cognitive load theory, Richard Mayer's principles of multimedia learning, or Allan Paivio's dual coding theory.
So for this week, I learned how to use Screencastify to record a short educational video with Google Slides. The video might look simple, but creating it was a frustrating experience and took me some time. Keeping in mind that humans are dual-channelled and that I needed to mix various media, I began creating my screencast. I made slides of different colours and of fruits in the corresponding colours. Then I wanted a child's voice saying the colours, so I found a video and downloaded its audio. I took that audio and spliced it so I had the colours I wanted, creating a new audio file to put into my presentation. The last step was recording my screen while I timed the audio to each slide.
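I cut the audio together by hand in an editor, but for anyone curious, here is roughly what the same splice looks like in code, assuming the pydub package (and FFmpeg) is installed. The file name and timestamps are made up.

```python
# Cut spoken colour names out of a longer audio track and join them
# into a new narration file (pip install pydub; FFmpeg required).
from pydub import AudioSegment

audio = AudioSegment.from_file("colours_video_audio.mp3")

# pydub slices by milliseconds; these ranges are hypothetical.
red = audio[2000:3000]    # the second where "red" is spoken
blue = audio[5500:6400]   # the second where "blue" is spoken

narration = red + AudioSegment.silent(duration=800) + blue
narration.export("narration.mp3", format="mp3")
```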
While making my screencast I kept in mind the principles of modality and redundancy. Both state that we learn best from visuals and narration, not visuals and text, so I made sure not to include any text and to find audio narration instead. I also implemented the voice principle, which states people learn better from real voices, not robots, and I thought that since my video is intended for small children, having a child speak the colours would be more inviting.
What I found challenging was the simplification of it all. It is very tempting to add more flair to these presentations, with more text and everything. However, I had to try hard to avoid adding distracting elements that would just become extraneous load.
I imagine the audience for this screencast would be very young children who are still learning colours and their associations. It’s about getting the bare basics of these names of colours and what they might represent by hearing them over and over in association with the images on the screen. I would’ve made it longer but I think this gets the point across for us adults.
As an art history student, much of my learning is presented to me as visuals next to text, so learning about dual coding theory was interesting to me. Many of these principles, as far as I could tell, are just basic visual design principles: how to make something look good and understandable. It was interesting to hear those ideas shared in the videos as if they weren't already self-explanatory, like the rule of spatial contiguity, for example, or signalling. A principle I did find interesting was the segmenting principle, which one of the videos compared to how a book has chapter breaks.
Many of the principles were pretty intuitive to me when I learned them, like signalling and spatial contiguity, but the redundancy principle is one I haven't always followed well. As an artistic person I sometimes like to take the simplicity out of things, so this is something I will try to focus on moving forward. I did, however, learn about cognitive load through this assignment: intrinsic difficulty, extraneous load, and germane load were new terms to me, and they are helpful for categorizing these different principles, which I will now try to remember and integrate into future assignments.
My Video:
Link to classmate’s post that I commented on: