Research has always felt to me like the most honest part of the learning design process. Design decisions can be made on intuition, lived experience, or professional judgment, and often are, but research is what separates a hunch from a claim. Standard 5 has asked me to engage with research not as an abstract academic exercise, but as a genuine disciplinary practice: to understand theoretical frameworks, to apply rigorous methodology, to use data ethically, and to communicate findings in ways that are both accurate and useful. Over the course of this program, that has meant moving from being a consumer of research to being someone who can critically interrogate it, design studies grounded in it, and contribute original analysis.
The three artifacts I selected for this standard each represent a different dimension of that development. My Task 2 assignment from EDF 6481 demonstrates theoretical foundations and methodological literacy. I designed a quasi-experimental mixed-methods research study from the ground up, identifying variables, justifying a research design, and operationalizing constructs like self-efficacy and emotional response into measurable instruments. My Final Motivational Research Proposal Presentation from EME 6419 demonstrates the ability to synthesize and apply research-based frameworks, drawing on ARCS, Self-Determination Theory, and the Motivation Research Scales to build an audience-centered, theoretically grounded instructional strategy with a formal evaluation plan.
And my Big Data Final Paper from EME 6356 demonstrates applied research in practice, using real workforce learning data from my own professional environment to formulate questions, conduct analysis, and generate evidence-based recommendations, all while grappling with the ethical dimensions of working with organizational data.
Together, these artifacts reflect a practitioner who takes research seriously, not as a box to check, but as the intellectual infrastructure that makes the rest of the work worth doing.
Candidates demonstrate foundational knowledge of the contribution of research to the past and current theory of educational communications and technology.
Course: EDF 6481 Foundations in Educational Research
Date: Spring 2026
Artifact: Task 2: Variables
Role: Sole Creator
Project Type: Assignment
This artifact is a formal research design assignment from EDF 6481: Foundations in Educational Research, in which I designed a quasi-experimental mixed-methods study examining the relationship between AI roleplay simulation engagement and sales professionals' learning outcomes. The assignment required identifying and defining all research variables, justifying the study's design, and articulating how each variable would be operationalized and measured.
The study was designed around a central independent variable, degree of engagement with an AI roleplay simulation platform, measured through platform analytics including session frequency, duration, scenario completion rate, and objection-handling attempts. Two dependent variables were identified: self-efficacy (measured via Likert-scale pre/post surveys using adapted items from Bandura's self-efficacy framework) and skill performance (measured through rubric-based assessments and AI-generated scoring of roleplay transcripts). A mediating variable, emotional response to the AI interaction, was also included and operationalized through Likert items and open-ended written reflections following each simulation session. The analysis plan called for thematic coding of qualitative data, paired t-tests to measure pre-/post-self-efficacy shifts, and multiple regression to examine the relationship between engagement levels and skill performance outcomes while controlling for prior experience.
1. Perform a Needs Assessment
The research question itself emerged from an identified performance gap: the need to understand whether AI roleplay tools actually improve self-efficacy and skill performance in sales training, rather than assuming they do. This grounded the study in a genuine organizational and instructional need.
9. Develop Performance Measurement Instruments
Operationalized two dependent variables into concrete measurement instruments: a Likert-scale pre/post self-efficacy survey adapted from Bandura's framework and a rubric-based skill performance assessment complemented by AI-scored transcript analysis.
12. Evaluate Instruction, Program, and Process
The study's evaluation plan, pairing quantitative pre/post testing with qualitative emotional response data and platform analytics, reflects a sophisticated understanding of multi-source evaluation design, in which no single data point is treated as sufficient evidence on its own.
13. Apply Research and Evaluation
Designed a full quasi-experimental mixed-methods study with clearly defined independent, dependent, and mediating variables, an operationalization plan for each construct, and a multi-method analysis approach — demonstrating the ability to apply research design principles to a real instructional question.
Candidates apply formal inquiry strategies in assessing and evaluating processes and resources for learning and performance.
Course: EME 6419 Motivation, Volition & Performance
Date: Summer 2025
Artifact: Final Motivational Research Proposal Presentation
Role: Sole Creator
Project Type: Assignment
This artifact is a nine-slide research proposal presentation created and delivered as a recorded YouTube video for EME 6419: Motivation, Volition & Performance. The presentation proposed a motivational intervention for Veracode's Account Risk Management (ARM) onboarding training program, grounded in three theoretical frameworks: Keller's ARCS model of motivational design (Attention, Relevance, Confidence, Satisfaction); Self-Determination Theory (SDT), which treats autonomy, competence, and relatedness as the drivers of intrinsic motivation; and the Motivation Research Scales (MRS), used to operationalize and measure motivational constructs.
The presentation opened with an audience analysis, including a visual breakdown of prior training experience across the Veracode ARM team, and used that analysis to justify motivational strategies tailored to a specific learner profile: experienced sales professionals with high technical competence but low familiarity with formal training formats. The strategy section was organized into a three-phase tactic table covering beginning-of-course, during-course, and end-of-course motivational interventions, each mapped to one or more ARCS categories and SDT needs. For example, relevance at the start was addressed through scenario-based warm-ups using real Veracode deal data; confidence during the course was supported through scaffolded modules with branching scenarios; and satisfaction at the end was reinforced through leaderboards and peer recognition. The final section proposed a mixed-methods evaluation plan using MRS survey data, engagement analytics, and qualitative interview data to assess whether the motivational interventions had the intended effect.
1. Perform a Needs Assessment
Conducted and visualized an audience analysis of the ARM team's prior training experience to ground the proposal in learner-specific evidence, rather than designing generically — ensuring the motivational strategies matched the actual profile and needs of the target learners.
5. Perform Job/Task/Content Analysis
Analyzed the specific performance context of ARM onboarding (technical sales, high complexity, low formal training exposure) to select frameworks and strategies that aligned with the demands of the job role, not just general motivational principles.
8. Recommend Instructional Strategies
Translated motivational theory into specific, phase-by-phase instructional tactics: scenario-based warm-ups for relevance, scaffolded branching scenarios for confidence, and gamification elements for satisfaction, demonstrating the ability to derive actionable design decisions from theoretical frameworks.
13. Apply Research and Evaluation
Proposed a mixed-methods evaluation plan using the Motivation Research Scales (a validated instrument), platform engagement analytics, and qualitative interview data, demonstrating the ability to design an evaluation approach grounded in formal research methodology and appropriate for the complexity of motivational outcomes.
Candidates conduct research and practice using accepted professional and institutional guidelines and procedures.
Course: EME 6356 Big Data in Education
Date: Fall 2025
Artifact: Big Data in Education Final Paper
Role: Sole Creator
Project Type: Final Paper
This artifact is the final research paper for EME 6356: Big Data in Education, in which I conducted an original data analysis examining the relationship between training and certification engagement and sales performance outcomes for account executives at Veracode, a cybersecurity company. The study used two primary data sources: Highspot LMS export data (tracking completion of onboarding training modules and time-to-completion) and Salesforce CRM data (capturing pipeline value and quota attainment). A third variable, VRM certification status, was incorporated from internal HR records to test whether earning a professional certification predicted sales performance beyond basic training completion.
The analysis produced four data visualizations: a training completion rate comparison by cohort, a box plot showing pipeline distribution by VRM certification status, a scatterplot of training completion time versus quota attainment, and a stacked bar chart showing the breakdown of module completion by certification tier. Key findings included that certified representatives carried significantly higher pipeline value and showed lower variance in quota attainment than non-certified peers, that early training completion (within the first 30 days) was associated with faster ramp-to-revenue, and that a manager sign-off bottleneck was adding an average of 11 days to the completion timeline for onboarding assessments. The paper concluded with three actionable recommendations: implement mandatory VRM certification as a formal milestone in the onboarding path, automate the manager sign-off workflow to remove the administrative bottleneck, and introduce a structured 30-day onboarding sprint with gamification and leaderboard elements to encourage early and consistent completion.
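The core comparison behind the box-plot finding, certified representatives carrying higher pipeline with lower variance, can be sketched with pandas. The column names and dollar figures below are hypothetical stand-ins; the actual paper worked from Highspot LMS and Salesforce exports.

```python
# Hypothetical sketch of the certified vs. non-certified pipeline comparison.
# Data is synthetic and chosen to mirror the paper's reported pattern:
# certified reps show a higher median and lower spread in pipeline value.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

certified = pd.DataFrame({
    "certified": True,
    "pipeline_value": rng.normal(650_000, 80_000, 30),
})
non_certified = pd.DataFrame({
    "certified": False,
    "pipeline_value": rng.normal(480_000, 150_000, 30),
})
df = pd.concat([certified, non_certified], ignore_index=True)

# Group summary mirroring the paper's box-plot comparison
summary = df.groupby("certified")["pipeline_value"].agg(["median", "std"])
print(summary)

# The box plot itself could be drawn with:
# df.boxplot(column="pipeline_value", by="certified")
```

Summarizing by group before plotting makes the two halves of the finding explicit: the median captures the pipeline-value gap, while the standard deviation captures the lower variance among certified representatives.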
8. Recommend Instructional Strategies
Translated analysis findings into three specific instructional and operational recommendations: mandatory certification milestones, automated workflow improvements, and a gamified 30-day onboarding sprint, demonstrating the ability to move from data to design.
9. Develop Performance Measurement Instruments
Designed and applied a multi-source measurement framework, combining LMS completion metrics, CRM outcome data, and certification status as independent indicators of training effectiveness, rather than relying on a single metric, demonstrating measurement sophistication and awareness of the limitations of any one data source.
12. Evaluate Instruction, Program, and Process
Used the data analysis to formally evaluate the effectiveness of the Veracode ARM onboarding program: identifying what was working (early completion predicted faster ramp), what was not (the manager sign-off bottleneck), and what was missing (a structured certification pathway), completing the evaluate-design-recommend cycle.
13. Apply Research and Evaluation
Conducted an original multi-source data analysis combining LMS completion data, CRM performance data, and certification records to answer a substantive organizational question, applying research methodology to real workforce learning data and producing evidence-based conclusions.