A climate change curriculum: how? what? when? and for whom? (Summary)

Today I tuned into a webinar given by Dr Gwyneth Hughes and Dr Peter Cannell, who responded to the growing interest in climate change education in Higher Education and explored how to address climate change across all disciplines. The webinar reported findings from a research project based on the University of London’s distance and online programmes, drawing on a small survey and interviews with programme directors about how they are teaching climate change in a range of disciplines.

Links to freely available and open access climate change resources were also shared: https://www.open.edu/openlearn/education-development/supporting-climate-action-through-digital-education/content-section-overview?active-tab=content-tab

Here is an AI-generated (Claude) summary of the webinar:

Here is a 597-word summary of the key points from the document:

The document is a transcript of a webinar on incorporating climate change into university curricula. The speakers discuss the importance of educating students on climate change, challenges with teaching the interdisciplinary topic, findings from a project examining how climate change is currently incorporated in University of London distance learning programs, and suggestions for improving climate change education.

Key points:

– Climate change is an urgent issue that will impact students’ lives, so universities have a responsibility to educate students on it. However, it crosses many disciplines making it complex to teach.

– A University of London project surveyed distance learning program directors and found mixed opinions on whether climate change should be taught as a separate module or embedded across curricula. 73% promoted student discussion of climate change, though often informally.

– There is limited insight into what distance students already know and want to learn regarding climate change.

– “Climate engaged” educators can connect climate change to their discipline but need time, resources and updated materials to systematically incorporate it into curricula and assessments.

– Transdisciplinary dialogue on climate change helps build understanding. A case study of a divinity program aiming to improve modules and run climate change events showed the value of this approach, but also the need for resources.

– Suggestions included: leveraging global partnerships for varied expertise and perspectives; focusing on specific geographies; starting climate change clubs to raise awareness then build out; having core sustainability modules.

– Key obstacles are lack of time, resources and support to update materials and teaching approaches for this evolving, controversial topic. Dedicated curriculum teams would enable more dynamic, discussion-based pedagogy.

In summary, climate change integration is complex but critical. Universities recognize this and more support is needed to systematically incorporate it across curricula, especially for distance learning.

Using generative AI for creating learning materials

Here is a summary of a webinar given today by Gary Fisher from the University of Derby, part of the ‘Teaching Across the Globe’ series, on ‘Using generative AI for creating learning materials’. The text below was generated entirely by AI: I downloaded the Zoom transcript and uploaded it to Claude, which then wrote the summary. (I had to miss the last 10 minutes as I had a tutorial to go to.)
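(As an aside, the transcript-to-summary step can also be scripted rather than pasted into the chat interface. Below is a minimal, untested sketch using the Anthropic Python SDK; the model name, prompt wording, word limit and file name are illustrative assumptions, not what I actually did for this post.)

```python
# Minimal sketch: summarising a downloaded Zoom transcript with the Anthropic
# Python SDK instead of the Claude chat interface. Model name, prompt and
# file name are assumptions for illustration only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("webinar_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarise the key points of this webinar transcript "
                   "in around 600 words:\n\n" + transcript,
    }],
)

print(response.content[0].text)  # the generated summary
```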

The document is a transcript of a webinar presentation by Gary Fisher on using generative AI to create online learning materials. Fisher works at the University of Derby, which has a large portfolio of online and distance learning programs. A key component of these programs is asynchronous online content – teaching materials posted on the virtual learning environment for students to access anytime.

Fisher conducted an experiment comparing a piece of human-authored online course content, which had been through the university’s rigorous multi-stage production process, with a piece of content generated by ChatGPT using prompt engineering. Over a lunch break he iteratively refined his prompts until ChatGPT produced content explaining the same concepts as the human-authored original.
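The idea of iterative prompt refinement can be sketched in code. The following is a hypothetical illustration using the current OpenAI Python SDK (not what Fisher described using; the model name, topic and refinement prompts are all assumptions), showing how each prompt builds on the previous draft until the output covers the required material.

```python
# Hypothetical sketch of iterative prompt refinement with the OpenAI Python SDK.
# The model, topic and refinement prompts are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Running conversation: each refinement builds on the previous draft.
messages = [{
    "role": "user",
    "content": "Write a short online study unit explaining <your topic> "
               "for first-year undergraduates.",
}]

refinements = [
    "Add a worked example and a short self-check activity.",
    "Rewrite in a friendlier, second-person tone and list three learning outcomes.",
]

for refinement in refinements:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": refinement})

# Final call applies the last refinement request.
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```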

The two pieces of content were analyzed using LIWC-22 for psycholinguistic analysis and via a student survey on Prolific. LIWC-22 suggested that while analytic thinking and confidence levels were similar between the two texts, the AI text exhibited greater authenticity and a more positive emotional tone. The student survey also found more favorable judgments of the AI text over the human text. Interestingly, when students were told the AI text was human-generated, their perceptions improved even more.

Fisher notes some limitations – the study has not undergone in-depth qualitative analysis with student focus groups, and the content has not been evaluated for accuracy by subject matter experts. He suggests further analysis on what academics think of AI-generated course content.

Ultimately, Fisher concludes that with the right prompts, ChatGPT can produce content resembling online course materials in a much shorter timeframe than humans can. But he still believes AI will augment rather than replace academics, envisioning an eventual process where humans use AI tools but still quality-check the output. The study provides an early experimental evaluation of AI capabilities and their impact on student perceptions in a narrow test case, but both perceptions and AI systems’ abilities will continue to evolve.

Stolen from God

I went to see Reg Meuross, Suntou Susso and Cohen Braithwaite-Kilcoyne at Kings Place last night.

The night was opened by poet Jenny Mitchell, and then Reg, Suntou and Cohen tackled the history of the transatlantic slave trade with a stunning song cycle. Narration was delivered by Folk On Foot/BBC Radio 4’s Matthew Bannister.

Stolen from God is a fantastic album. Check out one of the album’s songs, Good Morning Mr. Colston, here:

See Reg’s website for more details about the album

Two old fools and some songs about ghouls!

Also check out dj W’s solo show if you’re in Camberwell this Saturday:

UAL STANDS WITH PALESTINIANS

Please sign this open letter:

UAL STANDS WITH PALESTINIANS (openletter.earth)

Selected extracts of the open letter:

“Almost 90,000 Palestinian university students cannot attend university. Over 60 per cent of schools, almost all universities and countless bookshops and libraries have been damaged or destroyed, and hundreds of teachers and academics have been killed, including deans of universities and leading Palestinian scholars, obliterating the very prospects for the future education of Gaza’s children and young people”

“We stand with our Palestinian colleagues and pledge to do everything in our power to support and rebuild education and cultural institutions in Gaza and across Palestine”. 

“As artists, teachers, scholars and students, we cannot remain silent in the face of this unfolding genocide. We have a responsibility to support our Palestinian colleagues and build meaningful solidarity. We call on every member of the UAL community to support this call before it is too late”.

Lego and AI workshop

This post was entirely written on my phone whilst killing some time in a cafe …so might need some editing later.

Photo of the written activity and Lego bricks before the task was completed.

Today we had a guest facilitator from Imperial College London, Coco Nijhoff, who delivered a workshop entitled ‘Reflecting on the impact of generative AI on our roles’ to staff here at UAL. The two-hour workshop used the Lego Serious Play method to address the learning that has been required as AI technology evolves at speed across HE. Participants used Lego to build models as a means to ‘reflect, consider and share experience, focusing on the challenges of keeping up with generative AI and adapting accordingly in roles in HE contexts’.

Workshop objectives:

1. To understand the impact of generative AI on our roles.

2. To explore our hopes for generative AI in Higher Education.

We started with some simple activities to get us playing with the Lego bricks. Our first task was to build a tower, and this was quickly followed by building a structure based around a metaphor. We then had a general discussion about the usefulness of using the Lego as a learning activity.

My job and how it’s affected by AI, in Lego.

We next had two further activities that really got us thinking about our own roles at work and how AI is affecting the work that we do (see the photo above).

Finally, whilst working in small groups, we built a model of how AI is affecting Higher Education. This is what our model looked like:

AI and HE.

We all fed back to the whole group on our models and some common themes emerged:

1. HE needs to create safe spaces if we are to use AI in our teaching and learning.

2. AI is affecting HE in contradictory ways. On the one hand it is allowing creativity to blossom, bringing efficiency gains and new ways of doing things. But on the other hand it can stifle creativity, reduce efficiency and just add to our already heavy workload.

3. We need to develop staff and students’ digital literacy skills so that they can critically evaluate AI in education.

4. We really don’t know what the future holds for us as to the impact of AI in Higher Education. We can guess some of the consequences, but like other forms of technology it often gets used in unexpected ways.

Later on, after the session, I had a little chat with Coco about AI literacy. The photo above is her Lego model of AI literacy, which incorporates:
1. Competencies
2. Skills
3. Activities

All in relation to a constantly changing environment… to develop skills and make the world more transparent.

The universe!

However, AI literacy is a whole new topic and will have to wait for another blog post!

No Twickenham Arms Fair

Sign the petition…

Take action to demand the Rugby Football Union revoke permission for arms fairs to use Twickenham Stadium… it only takes two minutes to complete:

No Twickenham Arms Fairs! (eaction.online)

Photo from London demo yesterday Sat. 13th Jan. 2024

Generative AI and the Automating of Academia: Insights from Dr. Donna Lanclos and Lawrie Phipps

The following blog post was created entirely by AI (MS Teams/Claude/ChatGPT/DALL-E).

Image: Generative AI and automating academia in the style of Bauhaus

The landscape of academia is undergoing a seismic shift with the advent of generative AI tools like ChatGPT. A recent discussion featuring Dr. Donna Lanclos, an anthropologist, and Lawrie Phipps, an educational developer, shed light on this transformative era. Their research, involving a survey of approximately 500 UK academics, offers a critical perspective on the integration of AI in academic practice and thinking.

Disrupting Academic Values and Status

One of the striking findings from their study is the potential of AI to destabilize traditional notions of status and value in academia. There’s a growing concern that the emphasis on efficiency and output, driven by these technologies, might overshadow the intrinsic quality of academic work. This trend could lead to a reevaluation of what constitutes valuable work within the academic sphere, potentially upsetting established hierarchies and standards.

Automating Drudgery vs. Losing Creative Engagement

Another theme that emerged is the dual-edged nature of automation. While AI tools promise to relieve academics of repetitive tasks, there’s a palpable fear that this might come at the cost of diminishing opportunities for meaningful human creativity and care. The essence of academia, which thrives on critical thinking and innovative exploration, could be at risk if the human element is excessively automated.

Exacerbating Work Culture and Inequalities

The research further highlights the risk of AI tools aggravating the existing, often unhealthy, work culture in academia. Instead of freeing up time for more substantive work, there’s a possibility that these tools might deepen the prevalent inequalities and intensify the pressure to produce more in less time.

Outsourcing Academic Work and the Gig Economy

An intriguing aspect of the research touches on how some academics are already outsourcing work to gig economy contractors. The hope is that tools like ChatGPT could optimize efficiency and reduce labor costs, but this approach raises critical questions about the perpetuation of a toxic, hyper-productive academic culture.

The Creative Potential vs. Profit-driven Priorities

Despite the concerns, the research also uncovers a genuine interest among academics in the creative applications of AI. The challenge lies in reconciling the profit-driven motives of technology companies with the social mission of higher education. It’s essential to understand the motivations behind using these tools and to identify what kind of work is deemed valuable enough to warrant automation.

Reconsidering Assessment and Digital Literacy

The discussion also brought up the flaws in current assessment practices, particularly the overemphasis on final products rather than the creative process. There’s a growing call for reevaluating the value of hands-on teaching and mentoring, which are at risk of being seen as automatable tasks. Moreover, there’s a strong argument for involving librarians, given their expertise in digital literacy, in AI implementation decisions, despite their already immense workload pressures.

The Role of Marketing and Political Context

The conversation highlighted how marketers often stoke fears about job loss and the inevitability of AI adoption, conveniently overlooking the ethical considerations, inequalities, and real human impacts. There’s a discernible gap between genuine academic research on AI and the hyped-up corporate propaganda targeting university leadership. Additionally, the political assault on public funding for higher education in recent decades provides a backdrop for understanding the infiltration of private profit motives and managerial metrics in academia.

Concluding Thoughts

Donna and Lawrie’s key quotes encapsulate the core concerns about the rhetoric around generative AI and the audience it is aimed at. The conversation underscores the need for a balanced, ethical approach to integrating AI in academia. It’s not just about whether these tools are used, but how and why they are used, keeping in mind the broader social, ethical, and political contexts. The insights from this research call for a cautious yet open-minded approach to navigating the AI revolution in academia, emphasizing the importance of maintaining the human essence of academic work amidst the technological transformation.

The Intersection of Art and Technology: A Journey from the 1960s to Today

The following blog post was created entirely by AI (MS Teams/Claude/ChatGPT/DALL-E).

The Beginnings: Cybernetics and Art in the UK

In the late 1960s, a groundbreaking shift occurred in the UK’s art scene as artists began to integrate computers and artificial intelligence into their creative processes. Catherine Mason, an expert in this field, has delved into the history of this evolution, focusing on a central question: Will machines amplify or supersede the artist?

Cybernetics, a science concerned with communication and control theory, played a crucial role. It provided artists with a new framework, emphasizing control, communication between art components, and feedback from the environment or audience, enabling interactivity in art.

Pioneers of the Computer-Art Movement

Among the pioneers was Gordon Pask, known for his interactive artworks that explored the human-machine relationship. Another influential figure was Richard Hamilton, who envisioned a fusion of human and machine, leading to the concept of a “technological superhuman.” His notable students included Roy Ascott, who created innovative “change paintings” that allowed viewers to interact and modify the artwork.

Ascott’s “Ground Course” classes were ahead of their time, preparing students for a technologized future. Interestingly, music and visual arts innovator Brian Eno was among his students. These classes, although devoid of actual computers, used analog devices to impart cybernetic concepts.

Landmarks and Collaborations

The 1968 “Cybernetic Serendipity” exhibition at London’s ICA marked a significant moment, bringing together artists, scientists, and computing experts. This exhibition led to the formation of the Computer Arts Society in 1969, aiming to facilitate artists’ access to computers.

Collaborations between artists and technicians became a necessity due to the limited access to technology. A notable example was the 1971 animation “Spinning Gazebo,” created by graphic designer Clive Richards and computer programmer Ron Johnson at Coventry Polytechnic.

Educational Contributions and Advancements

The Slade School of Fine Art established an Electronic and Experimental Art department in 1974, under Malcolm Hughes. This department played a critical role in educating artists in coding and technology-driven art creation. Alumni like Paul Brown and Harold Cohen, who developed the Aaron drawing robot, were instrumental in advancing digital art practices.

Conclusion: The Legacy and Future of Digital Art

Catherine Mason’s exploration concludes that the digital art practices we see today are deeply rooted in the endeavors and innovations of over half a century ago. This historical perspective not only sheds light on the evolution of the human-machine relationship in art but also hints at the potential future directions of this dynamic and ever-evolving field. Understanding this history is key to appreciating the depth and complexity of today’s digital art landscape, as well as envisioning its future trajectory.

Navigating the Complexities of Machine Translation in University Education

The following blog post was created entirely by AI (MS Teams/Claude/ChatGPT/DALL-E).

In a recent video conference, Chris Rowell engaged Helen McAllister and Jo Bloxham from the University of the Arts London (UAL) Language Centre in an enlightening discussion about the use of machine translation tools by university students. Their insights shed light on an increasingly relevant topic in today’s multilingual educational environments.

Image created by DALL-E: AI and Languages

The Emerging Landscape

The UAL experts began by exploring various scenarios in which students employ machine translation tools. They prompted the audience to consider the appropriateness of these tools in different contexts, emphasizing the importance of factors such as the purpose of use, critical application, the extent of editing, and the impact on learning outcomes.

Machine Translation vs. Generative AI

Helen McAllister made a crucial distinction between machine translation and generative AI. Since 2016, machine translation has seen significant advancements, primarily due to the advent of neural networks and enhanced computing power. Interestingly, while many universities have developed policies around generative AI, the more pervasive use of machine translation often goes unnoticed. This oversight raises critical questions about authorship, learning outcomes, and the linguistic competence of graduates in an environment increasingly dependent on technological assistance.

Student Perspectives and Teacher Cautions

Jo Bloxham shared insights from UAL student focus groups, revealing a spectrum of opinions. While some students found these tools boosted their confidence, helped manage workloads, and provided easier access to resources, others expressed concerns about over-reliance, inaccuracies, loss of original meaning, and questionable authorship. Interestingly, students generally favored guidance over strict regulation.

External research echoed these findings, showing a more cautious stance from teachers, especially concerning the use of machine translation for writing as opposed to reading. A crucial debate emerged around whether these tools serve to level the educational playing field or merely act as a crutch.

Policy and Guidance Development

Helen highlighted how some universities are grappling with issues of authorship, learning outcomes, and misconduct related to machine translation. The development of guidance for both students and staff is critical. For instance, a student handbook from one university encourages judicious use of these tools, while staff guidance focuses on compassionate and critical approaches to their use.

Broadening the Discussion

The conversation emphasized that machine translation tools, while inevitable, require careful consideration, especially regarding the potential loss of meaning. It’s vital that guidance be inclusive, catering to all students, including those who speak English as an additional language, and considering the perspectives of multilingual staff.

The presenters argued for greater transparency in academic writing processes, debunking myths of inherent genius and emphasizing the need for human editing and critical thinking.

Conclusion: A Call for Open Discussion

McAllister and Bloxham’s discussion did not seek to provide definitive answers but rather to spark a broader conversation and encourage critical reflection on the use of machine translation in academia. Their insights highlight the complexities and nuances of integrating technology into learning environments and underscore the need for thoughtful, inclusive policies that enhance educational integrity while embracing the realities of a digitally interconnected world.


This blog post encapsulates the key themes from an engaging dialogue on machine translation in university settings. As educational institutions navigate this evolving landscape, the insights from experts like McAllister and Bloxham provide valuable guidance in striking a balance between technological advancement and educational integrity.