Lessons from AI

“Many teaching techniques have proven value but are hard to put into practice because they are time-consuming for overworked instructors to apply,” say Ethan and Lilach Mollick (the Wharton School) – but instructors may not be so stretched if they learn to use AI

The Mollicks set out how AI can create content for five teaching strategies, HERE.

Granted, large language models make stuff up, but used judiciously, they suggest, “implementing teaching strategies with the help of an LLM can be a force multiplier for instructors and provide students with extremely useful material that is hard to generate.”

So what happens to teacher ed students who use this for a lesson-plan assignment?

There’s more in the Mail

In Features this morning

An agency to oversee the Accord is one future for HE relations with government. Roger Smyth explains how it’s done in NZ, HERE, and where it came from, HERE

plus Sean Brawley and colleagues on how Uni Wollongong is learning to live with risk. Another instalment in their series on how the university restructured to revitalise admin, HERE

and Sarah O’Shea (Curtin U) reminds us that the Accord should address a fundamental purpose of universities: creating better societies. New in Commissioning Editor Sally Kift’s celebrated series, Needed now in learning and teaching

Life or death research stats

Clinical prediction models support medical decision-making – which patients get what treatment – so it’s a problem when the data behind them is doubtful

Adrian Barnett (QUT) and colleagues* analysed research paper abstracts to find statistical evidence of “hacking” – researchers fudging data to get a better “area under the curve” (AUC) statistic, a measure of how well a model’s risk predictions discriminate between patient outcomes.

This is a problem given how important clinical prediction is – as of January there are 4200 publications a week. “Factors driving poor model quality include inadequate sample sizes, inappropriate exclusions, poor handling of missing data, limited model validation and inflated estimates of performance,” they warn.

“Our results indicate that some researchers have prioritised reporting a ‘good’ result in their abstract that will help them publish their paper.”

* Nicole White, Rex Parsons and Adrian Barnett (all QUT) and Gary Collins (Uni Oxford), “Meta-research: Evidence of questionable research practices in clinical prediction models,” Open Science Framework, HERE
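For readers unfamiliar with the statistic: AUC is the probability that a model scores a randomly chosen positive case above a randomly chosen negative one, so 1.0 is perfect discrimination and 0.5 is chance. A minimal pure-Python sketch of that standard definition (the example data is invented for illustration, not taken from the paper):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed from its probabilistic
    definition: the chance a randomly chosen positive case is
    scored above a randomly chosen negative one (ties count half)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative only: scores that perfectly separate outcomes give 1.0.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # 1.0
```

The “hacking” concern is about the reported values of exactly this number: small tweaks to exclusions or validation can nudge a mediocre AUC over a publishable-looking threshold.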

It’s a problem all over, especially when no one has another look

A recent House of Commons committee report warned, “there have been increasing concerns raised that the integrity of some scientific research is questionable because of failures to be able to reproduce the claimed findings of some experiments or analyses of data and therefore confirm that the original researcher’s conclusions were justified” (CMM May 11).

Med research grants: what not to do with AI

The National Health and Medical Research Council is working on an AI policy for funding – for now it has don’ts not do’s

As ever with the NHMRC, issues are thought through and implications addressed.

applications: “take caution,” as “it may not be possible to monitor or manage subsequent use of information entered into generative AI databases”

peer reviewers: “must not use generative AI tools to assist them in the assessment of grant applications as this would be a breach of their confidentiality undertaking”

updates to come: “as technologies develop and our understanding of risks and benefits is deepened.”

Wise move – what with nobody having a digital clue in a cyber bucket what new tech there will be in a month, or a week.

Colin Simpson’s ed-tech must reads of the week

Can large language models write reflectively? – from Computers and Education: Artificial Intelligence

Among the strategies often presented for designing assessment tasks to counter the GenAI menace is having students write personal reflections about their learning that unfeeling robots would be likely to struggle with.

While anecdotal evidence suggests that it might be catching up quickly, Li et al. from Monash and UniMelb note that this assumption is largely untested in current research. They found that ChatGPT is now able to generate “reflections” that outscore human reflections in pharmacy-related subjects. Interestingly, they also claim to have built a classifier that is able to detect GenAI-produced work with higher accuracy than human assessors – assuming students work within a given set of relatively specific prompts that the classifier has been trained on.

***

Artificial Intelligence resources from TEQSA

One might hope that a regulatory body such as TEQSA would be up to speed on the emergent GenAI space, and their resources page is encouraging. It assembles a host of guides and explainers from Australian universities (including some that my colleagues and I worked on), with useful information on assessment, incorporating AI tools into teaching, and engaging with students.

***

Impact of generative AI on the landscape of higher education – Webinar Thurs June 1, 1pm AEST from Higher Education Research and Development Society of Australasia

Rounding out our GenAI news, this webinar from the Victoria branch of HERDSA next week features Trish McCluskey (Deakin) and Danny Liu (USyd) as well as students from law, engineering, commerce, and neuroscience. The student voice is still somewhat underrepresented in these discussions, and it is good to see that being addressed.

The value of recognition programs: The new PSF and beyond – Webinar Thurs May 25th, 12pm AEST from ASCILITE TELedvisors Network

A common concern raised in learning and teaching circles in Higher Ed is that teaching is undervalued and career progression is still overly centred on research prowess. A growing number of Australian universities are working to shift the culture by participating in the UK-based Advance HE’s HEA fellowship program. The program is informed by the Professional Standards Framework, and QUT’s Abby Cathcart provides an update on changes to the PSF and the larger implications for accrediting and recognising expertise in learning and teaching across the sector.

Colin Simpson has worked in education technology, teaching, learning design and academic development in the tertiary sector since 2003 at CIT, ANU, Swinburne University and Monash University. He is also one of the leaders of the ASCILITE TELedvisors Network. For more from Colin, follow him on Twitter @gamerlearner (or @[email protected] on Mastodon)

Taking the pulse of STEM

Science, technology, engineering and maths are great, although nobody has asked how the people doing them feel

So huzza for Science and Technology Australia which is surveying the STEM workforce, asking people about what they do, how they got into the biz and how it’s going for them.

It’s a first take of the pulse of the STEM sector, says STA’s Misha Schubert, and needed “to inform how policy settings shape careers.”

Who collects from the MRFF

The NHMRC has long had an old bloke problem – they win lots of grants – so what’s happening at the Medical Research Future Fund?

The Fund’s 2021-22 report reveals a paunch of chief investigators on the far side of 50. Some 869 chief investigators aged 50-59 (38 per cent) won grants, as did 438 of the 60-69 year olds (35.6 per cent) and 58 of the 70-plus CIs (30.4 per cent).

But the percentage spread is pretty much in line with the other age groups – 40-49 is the largest and most successful, with 1106 CIs (38 per cent).

With way more senior men (fewer career breaks for families) the gender split is pronounced: 564 male lead applicants (Chief Investigator A) to 457 women. But the success rate favours women – 31 per cent, against 27 per cent for men.

However, CIA men win more grants, and money, in all MRFF themes – except research translation, where women picked up 57 per cent of the grants and 55 per cent of the funds.
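If the percentages above are success rates (an assumption – the report may define them differently), the number of applications in each age band can be back-computed as winners divided by rate. A rough sketch using the figures quoted in this item:

```python
# Back-of-envelope only: treats each quoted percentage as a success
# rate and reverse-engineers applications = winners / rate.
winners = {"40-49": 1106, "50-59": 869, "60-69": 438, "70+": 58}
rate = {"40-49": 0.38, "50-59": 0.38, "60-69": 0.356, "70+": 0.304}

implied_applications = {
    band: round(winners[band] / rate[band]) for band in winners
}
# e.g. roughly 2,287 applications implied for the 50-59 band
```

The same division also recovers the headline claim about age: the 40-49 band implies the largest application pool as well as the most grants.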