SPEAKERS

Emeritus Professor David Andrich
The University of Western Australia
David Andrich is Emeritus Professor of Education at The University of Western Australia. His interests are in educational, psychological and social measurement, where he is best known for his work on Rasch measurement theory, including its applications through software development. In 1990, he was elected Fellow of the Academy of the Social Sciences in Australia for his contributions to measurement in the social sciences. He has published in educational, psychological, sociological, statistical and, more recently, physics measurement journals. In addition to many articles, he is the author of Rasch Models for Measurement (Sage, 1988) and coauthor of A Course in Rasch Measurement Theory: Measuring in the Educational, Social and Health Sciences (Springer, 2019). His more recent work has been on the study of growth in attainment tests, reflected in the publication Andrich, D., Marais, I. and Sappl, S. (2023). Rasch Meta-Metres of Growth for Some Intelligence and Attainment Tests. Springer, Singapore.
Title: Rasch Meta-metres of Growth in Reading and Mathematics Attainment at Both Population and Individual Levels
Although it is hardly known, Georg Rasch had an approach to studying growth based on the principle of invariant comparisons, the principle for which he is well known through his models for measurement. The approach identifies a non-linear function of time, called a meta-metre, which governs the growth of all individuals in a population. Within the meta-metre, each individual's rate of growth is linear and invariant, permitting comparisons among individuals using standard statistical procedures. This address illustrates the approach with the educationally important variables of reading and mathematics attainment, using tests from two longitudinal studies. Each of the meta-metres shows early rapid, decelerating growth, with noticeably different rates of growth among sub-populations. Decelerating growth is also related to the common grade scale, showing that any grade difference between groups in the early years invariably increases in later years. This increase has implications for interventions for groups at risk in their attainment.
Title: Adapting the Rasch Meta-metre of Growth for Variables in the Social Sciences
Georg Rasch is well known for applying his principle of invariant comparisons to provide interval-level measurements. He also studied physiological growth by applying the same principle. His approach identifies a transformation of time, called the meta-metre, within which every individual's rate of growth is linear and therefore also invariant. Adapting Rasch's approach to growth for social science variables requires interval-level measurements. For studies of growth, the instruments of measurement generally require a linked design of increasing item difficulty that ensures items are successively aligned to each individual's stage of growth, perhaps across 10 years. The workshop has two parts: first, it shows the invariant properties of items necessary for linked designs and an efficient approach to diagnosing any lack of invariance; second, it introduces an adaptation of Rasch's approach to estimating a meta-metre of growth for social science variables, with illustrations from simulated and real data.
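To make the meta-metre idea concrete, the sketch below simulates growth that is linear within a common non-linear transformation of time. This is purely illustrative and is not Rasch's estimation procedure; the choice of meta-metre m(t) = log(1 + t), the time grid, and the individual rates are all assumptions for the example.

```python
import numpy as np

# Hypothetical meta-metre: a common, decelerating transformation of time.
def meta_metre(t):
    return np.log1p(t)  # early rapid, then decelerating growth

t = np.linspace(0, 10, 50)            # e.g. years of schooling
rates = np.array([0.8, 1.0, 1.3])     # each individual's invariant growth rate

# Each individual's attainment is linear in the meta-metre, not in raw time.
attainment = rates[:, None] * meta_metre(t)[None, :]

# Within the meta-metre, a per-person least-squares slope through the origin
# recovers each rate, so individuals can be compared with standard statistics.
m = meta_metre(t)
recovered = attainment @ m / (m @ m)
print(recovered)  # ≈ [0.8, 1.0, 1.3]
```

The point of the sketch is the workshop's central claim: once the shared non-linearity is absorbed into the metric of time itself, individual differences reduce to comparisons of linear rates.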

Professor George Engelhard Jr.
University of Georgia
Professor George Engelhard, Jr., PhD, is at The University of Georgia. He received his PhD in 1985 from The University of Chicago in the MESA Programme (measurement, evaluation, and statistical analysis). While at The University of Chicago, he worked closely with Professors Ben Bloom and Ben Wright. He is the author of several books, including his latest with Dr Jue Wang: Invariant Measurement: Using Rasch Models in the Social, Behavioral, and Health Sciences (2nd edition). He is currently president of the Pacific Rim Objective Measurement Society and a fellow of the American Educational Research Association.
Title: Measurement, Explanation, and Invariance: Next Generation Invariant Measurement
Science is built upon relationships between measurement, explanation, and invariance. My address emphasizes that the principles of invariance are essential to measurement and scientific objectivity. Rasch measurement theory highlights the role of objectivity in creating and using invariant scales. In addition to this connection between measurement and invariance, science relies on discovering explanations that reflect stable relationships between variables. We seek invariant relationships across various conditions. Explanatory Item Response Models (EIRMs) can provide an approach for linking measurement and explanations based on principles of invariance. The essential principles for the next generation of invariant measurement include the following:
- constructs should be unidimensional and defined by latent variables,
- the measurement of persons should be invariant across items,
- item calibrations should be independent of specific persons, and
- structural analyses should seek invariance in relationships among variables.
Next generation invariant measurement should provide the integration of measurement theory with explanatory models to deepen our understanding, and to discover invariant relationships in the human sciences.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
This plenary session explores the integration of Artificial Intelligence (AI) with Rasch-based assessment and measurement, examining both opportunities and challenges across various domains. We will discuss how AI can enhance Rasch applications, improving scalability, efficiency, and precision in large-scale assessments through advancements like automated item generation and adaptive testing. However, ethical concerns such as algorithmic bias and data privacy will be addressed, emphasising the need to balance AI benefits with fair and accountable practices. The interdisciplinary nature of this field will be highlighted, stressing collaboration between psychometricians, researchers, and AI experts. We will explore the evolving skills required and challenges in automating Rasch analyses while maintaining measurement integrity. By fostering dialogue on the future of AI in Rasch-based measurement and assessment, we aim to chart a course for responsible innovation, enhancing our understanding of human development and performance through robust, objective measurement while navigating the complexities of AI implementation.
Title: Rasch Measurement for Rater-mediated Assessment
Rating scales are widely used for human judgments across the social, behavioral, and health sciences, from high-stakes performance assessments in education and personnel evaluations to functional assessments in medical research. This workshop applies principles of invariant measurement and lens models from cognitive psychology to examine judgment processes in rater-mediated assessments, focusing on creating, evaluating, and maintaining invariant systems (Engelhard & Wang, 2024). We introduce rater-mediated assessments such as performance assessments, demonstrating how Rasch models can provide item-invariant person measurement and person-invariant item calibration. Building on these foundations, the workshop explores the Many-Facet Rasch Model for developing robust performance assessments, illustrated by large-scale writing examples. Participants are encouraged to bring their own data for hands-on analysis and discussion, and will gain practical strategies for using rating scales. Throughout the workshop, the Facets software (Linacre, 2024) will exemplify these principles, supporting the implementation of rating scales that yield reliable and meaningful human judgments.

Professor Jue Wang
University of Science and Technology of China
Jue Wang, PhD, is currently a professor in the Department of Psychology at The University of Science and Technology of China. Dr Wang received her PhD in the Quantitative Methodology Programme under Educational Psychology at The University of Georgia, and previously worked in the Research, Measurement & Evaluation Program at The University of Miami. Her research focuses on examining rater effects in rater-mediated assessments, such as writing assessments and creativity assessments, using Rasch measurement models and unfolding models. She has published in peer-reviewed journals including Educational Psychology Review, Psychology of Aesthetics, Creativity, and the Arts, Educational and Psychological Measurement, Journal of Educational Measurement, and Assessing Writing. Dr Wang has co-authored two books (with Professor George Engelhard): Rasch Models for Solving Measurement Problems: Invariant Measurement in the Social Sciences, published by Sage as part of the Quantitative Applications in the Social Sciences (QASS) series, and Invariant Measurement: Using Rasch Models in the Social, Behavioral, and Health Sciences (2nd edition) by Routledge.
Title: Addressing Human Scoring in Subjective Creativity Assessments
This talk provides an in-depth examination of human-based scoring methods for subjective creativity assessments (SCA). It begins with an overview of the PISA 2022 framework on creative thinking, followed by two empirical studies exploring various scoring techniques. The first study illustrates the use of a partial credit model to identify rater effects and rating scale malfunctioning in expert scoring methods, while also examining how rater judgments are influenced by features of creative responses. The second study explores the peer scoring method as an alternative approach for classroom creativity assessments. The talk emphasizes the complexities and challenges of human judgment in evaluating creativity, offering insights into improving the reliability and validity of creativity assessments, as well as the development of automated scoring methods.
Title: Rasch Measurement for Rater-mediated Assessment
(Abstract as for the workshop of the same title under Professor George Engelhard, Jr. above.)

Associate Professor Vahid Aryadoust
Nanyang Technological University, Singapore
Vahid Aryadoust, PhD, is an Associate Professor of Language Assessment at the National Institute of Education, Nanyang Technological University (NTU), Singapore. He teaches graduate and doctoral courses on generative artificial intelligence in language assessment and research methodology, while serving as the Research Program Leader in his department and supervising Master’s and PhD students. His research spans sensor technologies such as eye tracking and neuroimaging in language assessment, generative AI applications, meta-analysis, and scientometrics, with extensive publications in these areas. A multi-award-winning scholar, Dr Aryadoust and his team received the International Language Testing Association’s (ILTA) Best Article Award in 2024 for their groundbreaking paper on the application of sensor technologies in listening assessment. He also runs a YouTube channel, Statistics and Theory, which promotes open access to knowledge and science (https://m.youtube.com/@VahidAryadoust).
Title: What Can Generative AI Do for Assessing Listening and Speaking Skills
In this talk, I draw on the forthcoming book Assessing Listening in the Age of Generative Artificial Intelligence (Aryadoust, 2025) to discuss how AI technologies can be used for listening and speaking assessment. These technologies include foundation models, text-to-speech, and automated speech recognition. Foundation models, such as large language models, are trained on extensive datasets, which enables them to address a wide range of tasks. Text-to-speech and automated speech recognition systems generally function as more specialized components for developing AI-based assessments. When these AI-driven technologies are combined, they can create robust and scalable frameworks for designing language assessment tools and evaluating oral interaction competence. These innovations have great potential for educators, researchers, and practitioners, as they offer a set of integrated tools to address the needs of listening and speaking assessment in a world increasingly shaped by generative AI.

Dr Che Yee Lye
Singapore University of Social Sciences
Che Yee Lye, PhD, is currently Senior Lecturer with Singapore University of Social Sciences (SUSS), where she teaches assessment-related courses and conducts professional development training on AI, measurement and assessment for SUSS faculty and associates. Dr Lye received her PhD (Education) from The University of Adelaide. Her current research focuses on adaptive testing and learning using Rasch measurement models, AI and assessment, as well as language curriculum and assessment. Before joining SUSS, she served as a Senior Curriculum Developer at the United Chinese School Committees’ Association of Malaysia, where she researched English language curriculum and evaluation, published educational materials, and conducted teacher training. She was also a Research Specialist at the Singapore Examinations and Assessment Board, focussing on assessment for learning as well as computerised and multistage adaptive testing for primary school Mathematics and English. In her current role with SUSS, she leads the AdLeS Research Group (ARG) in developing the Adaptive Learning System (AdLeS). Since 2021, AdLeS has served over 2,700 students, enabling instructors to monitor progress and identify students needing support, and providing students with meaningful feedback on learning. She is also Chair of the Special Interest Group on Generative AI & Learning, Teaching and Assessment (SIG-AILTA) and co-manages https://sigailta.com, a blog dedicated to sharing ideas and resources on pedagogical and assessment practices in the era of generative AI.
Title: Refocusing Educational Measurement: Understanding Student Learning through Adaptive Learning
Adaptive learning is a promising technology that is transforming higher education by personalising instruction to meet the diverse needs of adult learners. It leverages data-driven algorithms and continuous assessment to tailor content and pace to individual needs, enhancing learning outcomes. Traditional methods of assessment and measurement tend to focus on the starting and ending performance points, neglecting the progression of a learner’s development. Furthermore, a significant tension exists between using assessment data for institutional accountability and using it to improve learning and teaching. I will argue that our adaptive learning pedagogy provides an opportunity to adopt a broader approach to measuring student progress by prioritising learning processes and growth, and by integrating human-machine collaboration in both the development of adaptive learning content and the nuanced interpretation of learner data.

Professor Zi Yan
The Education University of Hong Kong
Professor Zi Yan, PhD, is a Hong Kong RGC Senior Research Fellow and serves as the Head of the Department of Curriculum and Instruction at The Education University of Hong Kong. Additionally, he holds the title of Honorary Professor at the Centre for Research in Assessment and Digital Learning at Deakin University. His research and publications primarily focus on two areas: educational assessment in both school and higher education contexts, with a particular emphasis on student self-assessment, and Rasch measurement, specifically its application in educational and psychological research. He is also the co-author of the book ‘Applying the Rasch model: Fundamental measurement in the human sciences (4th ed.)’.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
(Abstract as for the plenary session of the same title under Professor George Engelhard, Jr. above.)

Associate Professor Jeffrey Durand
Toyo Gakuen University
Jeffrey Durand is an associate professor in the Faculty of Global Communication at Toyo Gakuen University in Tokyo, Japan. His background is in language education and language testing, especially rater-mediated assessment using many-facet Rasch measurement. He also has research interests in motivation, student study abroad, global mindset, and other intercultural issues, and teaches a number of courses related to global issues and globalisation.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
(Abstract as for the plenary session of the same title under Professor George Engelhard, Jr. above.)

Dr Mohd Zali Mohd Nor
Newstar Agencies Sdn. Bhd.
Dr Mohd Zali Mohd Nor is an I.T. Manager at a Malaysian shipping agency, where he leads a team of analysts and developers in developing and maintaining worldwide enterprise shipping solutions. He received his B.Sc. in Mathematics and Computing from The University of Michigan, Ann Arbor, MI, USA, in 1988, a Master of Management in I.T. from Universiti Putra Malaysia in 2005, and a PhD in Management Information Systems from Universiti Putra Malaysia in 2012.
His involvement in psychometrics and Rasch measurement models began in 2009, specialising in the Rating Scale, Partial Credit and Many-Facet Rasch (MFRM) models. He is currently active in providing training and academic consultation on Rasch measurement and has assisted many postgraduate students from local and international universities with research methodology and Rasch analyses. He has also served as a psychometrician on several assessment projects with the Department of Education Malaysia, the Fire and Rescue Department and the National Child Development Research Center (NCDRC).
He is currently Vice President of the Pacific Rim Objective Measurement Symposium (PROMS), a committee member of the Malaysian Psychometric Association (MPA) and Vice President of myRasch.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
(Abstract as for the plenary session of the same title under Professor George Engelhard, Jr. above.)

Professor Quan Zhang
Jiaxing University, Zhejiang, China
The World Sports University, Macau SAR, China
Professor Zhang Quan, PhD, is a PROMS board member and Professor at Jiaxing University and the World Sports University, Macau, China. Since 1989, he has been involved in test equating for large-scale, high-stakes language assessments. He currently serves as the China representative of PROMS, editor of the PROMS conference proceedings, and reviewer for several prestigious academic journals. Since 2012, he has organised or helped organise PROMS conferences. During the COVID-19 pandemic, he organised a small team of qualified computer engineers and testing professionals to develop Rasch-GZ, a Rasch-based software package for item analysis and test equating.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
(Abstract as for the plenary session of the same title under Professor George Engelhard, Jr. above.)
Title: Rasch-GZ: Item Analysis and Test Equating in the Age of AI
The purpose of this workshop is to introduce two important applications of the Rasch model to language testing, item analysis and test equating, via Rasch-GZ. The workshop falls into three parts:
- Introduction to Rasch-GZ;
- Demonstration of Rasch-GZ; and
- Q & A.
No particular psychometric expertise is required. Participants need only bring their own laptops to download and install Rasch-GZ, free of charge, for future use. There are no other prerequisites.
For more detailed features of Rasch-GZ, see https://doi.org/10.2991/978-94-6463-494-5_25
For more information about the PROMS conference, visit https://atlantis-press.com/proceedings/proms-23

Dr Rassoul Sadeghi
Australian Curriculum, Assessment and Reporting Authority
Dr Rassoul Sadeghi is a Lead Psychometrician at the Australian Curriculum, Assessment and Reporting Authority (ACARA), where he has worked since 2015. Before joining ACARA, he was a Senior Psychometrician at Educational Assessment Australia (EAA). In his current position, he is responsible for the psychometric aspects of the National Assessment Program – Literacy and Numeracy (NAPLAN) online. He has more than 25 years of expertise as a psychometrician, specialising in the following areas:
- Management and analysis of large and complex data sets
- Test equating and scaling using the Rasch measurement model
- Development and maintenance of item banks
- Design of both low-stakes and high-stakes online assessment programs
- Design and implementation of adaptive testing (CAT and MST)
- Resolving measurement issues emerging from live testing situations
Title: Item Banking and Adaptive Testing
This workshop is tailored for all participants eager to delve into advanced concepts in educational measurement. It offers a comprehensive introduction to item banking, emphasising the design and management of test items enriched with detailed metadata. Participants will explore how adaptive testing leverages item banks to tailor assessments dynamically, aligning item difficulty with test-takers’ abilities to enhance accuracy and efficiency. The workshop will underscore the pivotal role of item banking as the foundation for effective adaptive testing, ensuring both precision and fairness. Furthermore, it will cover test equating using the Rasch measurement model, illustrated through examples from prominent large-scale assessments like NAPLAN. Ideal for participants aiming to advance their expertise, this session bridges theory and practice in modern assessment design.
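The abstract above notes that adaptive testing aligns item difficulty with test-takers' abilities. Under the Rasch model, this is commonly done by selecting the unadministered item whose difficulty is nearest the current ability estimate, since that maximises the item's Fisher information. The sketch below is a minimal, hypothetical illustration (the item bank, difficulties and starting ability are invented for the example, and real CAT systems add exposure control and content constraints).

```python
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def next_item(theta, difficulties, administered):
    """Index of the unadministered item whose difficulty is nearest theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

# Tiny hypothetical item bank (difficulties in logits)
bank = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
theta = 0.0                      # current ability estimate
chosen = next_item(theta, bank, administered=set())
print(chosen)  # index 2: the item with difficulty 0.0
```

After each response, theta would be re-estimated (e.g. by maximum likelihood) and the selection repeated, which is what makes the test adaptive rather than fixed-form.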

Dr Iris Lee
Singapore Ministry of Education
Dr Iris Lee is an education professional who completed her PhD in 2007. Her career spans teaching in Singapore and Hong Kong, followed by roles at Singapore’s Ministry of Education (MOE) and a secondment to NIE/NTU. A longstanding member of the Pacific Rim Objective Measurement Society (PROMS), she first attended its conference in 2007, engaging with experts such as Prof Mike Linacre and the late Prof Wang Wen-Chung. In her current role at MOE, Dr Lee specialises in survey development, data analysis, and research-related tasks, applying her expertise to shape educational policies and practices in Singapore’s education system.
Title: The Road Ahead: Future Challenges and Opportunities in Objective Measurement
(Abstract as for the plenary session of the same title under Professor George Engelhard, Jr. above.)