Professional knowledge and skills are what separate someone who knows instructional design from someone who can actually do it in the real world. Standard 4 captures the aspects of practice that textbooks cannot fully teach: how to work productively with other people toward a shared goal, how to lead a team without overpowering it, how to look honestly at your own work and grow from what you find, and how to design assessment systems that genuinely measure what they claim to measure. These are the competencies that only develop through experience, through projects that had real stakes, real teammates, and real feedback.
The four artifacts I chose for this standard each represent one of those lived experiences. My project manager's report from EME 6284 documents a collaborative instructional design project in which I served as team lead, responsible not only for my own contributions but for observing, supporting, and formally evaluating the work of four peers. My facilitation plan from the Project Management course captures the leadership dimension of my practice: a structured, research-informed plan for guiding a multi-stakeholder training initiative through its most complex communication and decision-making moments. My Big Data final reflection paper shows what reflective practice actually looks like when it goes deep: not a summary of what happened, but a rigorous analysis of what surprised me, what changed my thinking, how I applied it to real data from my own professional context, and what I still want to learn. And my online course evaluation rubric demonstrates my ability to design an assessment instrument from the ground up, grounded in the literature and built to measure what actually matters in an online learning environment.
Together, these artifacts represent the version of me that exists outside the classroom, the practitioner who collaborates, leads, reflects, and evaluates with intention.
Candidates collaborate with their peers and subject matter experts to analyze learners, develop and design instruction, and evaluate its impact on learners.
Course: EME 6284 Problems in Instructional Design for Computers
Date: Fall 2024
Artifact: Project Manager Report on Group Member Contributions
Role: Sole Creator
Project Type: Assignment
This artifact is a formal project manager report submitted as part of a collaborative instructional design group project in EME 6284: Problems in Instructional Design for Computers. As the designated project manager for Group 4, I was responsible for coordinating the work of four teammates (Lora Crider, Elizabeth Lemelin, Derek Hemenway, and Davidson Pierre) across a multi-phase instructional design project that included developing a Training Curriculum Plan, an Instructional Design Plan (IDP), and a Final Project with individual units of instruction.
The report documented each team member's contributions across all three project phases, capturing specific task ownership (e.g., who authored which lesson plans, which IDP sections, and which final project units), attendance and communication patterns, collaboration style, and overall performance. Ratings were assigned on a 1 to 5 scale using a structured rubric ranging from 'Not Contributing' to 'Outstanding Contribution.' The report required both objectivity and nuance: one team member received a rating of 5 for consistently exceeding expectations and supporting others, while another received a 3 following documented instances of requiring additional guidance and leaning on the team during a challenging period. The report was submitted confidentially to the course instructor, Dr. Park.
2. Plan and Monitor Training Projects
Managed a multi-phase instructional design project with four collaborators, tracking contributions at each phase (Training Curriculum Plan, IDP, Final Project), monitoring participation and communication patterns, and formally documenting outcomes: the core activities of project planning and monitoring.
5. Perform Job/Task/Content Analysis
Analyzed the specific tasks, subtasks, and deliverables required of each project phase and assigned them across the team based on capacity and expertise, demonstrating the ability to break complex instructional design work into structured, manageable components.
10. Develop Training Program Materials
Contributed directly to the Final Project (co-authoring Unit 4.1 and Unit 4.2) while simultaneously managing the team, demonstrating the ability to produce instructional design deliverables as both a practitioner and a project leader.
12. Evaluate Instruction, Program, and Process
Applied a structured contribution rating rubric to evaluate each team member's performance across specific criteria — not simply offering impressions, but grounding evaluations in observed behaviors, documented contributions, and consistent standards.
Candidates lead their peers in designing and implementing technology-supported learning.
Course: EME 6235 Technology Project Management
Date: Summer 2025
Artifact: Facilitation Plan
Role: Sole Creator
Project Type: Assignment
This artifact is a formal facilitation plan developed as part of the Channel Partner Technical Training and Certification Program project in EME 6235: Technology Project Management. The plan defined both the structural and behavioral dimensions of managing the project's meetings, decisions, and stakeholder relationships throughout the initiative. It addressed five interconnected areas: sync structure, facilitation techniques, my role as facilitator, pre-meeting preparation, and partner-facing preparation.
The sync structure established four meeting cadences: weekly project team syncs (15 to 20 minutes), biweekly stakeholder alignment meetings, monthly executive milestone reviews, and ad hoc escalation calls for blockers that threatened the project timeline. Between meetings, real-time facilitation was maintained through Slack in the dedicated '#channel-training-build' channel. The facilitation techniques section outlined specific behavioral strategies for maintaining meeting effectiveness: using round-robin participation to ensure quieter members had space to contribute, maintaining psychological safety by asking clarifying questions without assigning blame, keeping discussions grounded in data and scope, time-boxing conversations to prevent drift, and documenting decisions in real time to prevent ambiguity. The plan also addressed how to manage conflict and ambiguity by restating issues in neutral language, validating each perspective, and breaking complex problems into two or three focused decision points. Stakeholders were explicitly identified across five functional groups: Sales Enablement, Channel Operations, Channel Marketing, SME and Channel Solutions Architects, and Partner stakeholders.
2. Plan and Monitor Training Projects
Established a tiered, cadenced meeting structure (weekly, biweekly, monthly, ad hoc) with designated tools, agenda protocols, and decision documentation practices: the governance infrastructure required to sustain a complex training project through implementation.
5. Perform Job/Task/Content Analysis
Identified and mapped all five stakeholder groups with their specific roles and weekly time commitments, reflecting a thorough analysis of who needed to be involved in the project and in what capacity at each phase.
10. Develop Training Program Materials
Produced the facilitation plan itself as a formal project governance deliverable, demonstrating the ability to create structured documentation that guides implementation rather than simply describing what was designed.
11. Prepare End-Users for Implementation
Developed a dedicated partner-facing preparation protocol, including FAQs, escalation paths, a dry run of the partner LMS certification flow, and coordination with Channel Marketing for launch communications, demonstrating readiness to prepare external stakeholders, not just internal teams.
Candidates analyze and interpret data and artifacts and reflect on the effectiveness of the design, development and implementation of technology-supported instruction and learning to enhance their professional growth.
Course: EME 6356 Big Data in Education
Date: Summer 2025
Artifact: Big Data Final Reflection Paper
Role: Sole Creator
Project Type: Final Paper
This artifact is a final reflection paper submitted for EME 6356: Big Data in Education, in which I reflected on key conceptual surprises, practical techniques, discussion experiences, areas for further learning, and the impact of the course project on my professional practice. The paper demonstrated a level of intellectual engagement that went well beyond summarizing course content: it traced how specific readings and experiences had genuinely shifted my understanding of data, ethics, and the role of analytics in learning design.
The paper documented three areas of genuine surprise. First, the discovery that the real value of learning analytics lies not in dashboards but in 'closing the loop': connecting data insights to timely human actions such as micro-grants, outreach, and policy changes, as illustrated by the University of South Florida's predictive analytics program. Second, the unexpected time cost of data preparation: the Forbes finding that data scientists spend the majority of their time cleaning and organizing data was validated firsthand when merging Highspot training exports revealed that name fields, timestamps, and status codes could 'quietly derail an otherwise simple question.' Third, a reconceptualization of data ethics: before the course, I understood privacy as the primary concern, but Johnson's (2017) argument that algorithms can 'launder' bias, making historical inequities appear objective, reshaped my understanding of data work as fundamentally a justice issue. These insights were applied directly to a professional project: an analysis of onboarding assessment data for sales representatives at a cybersecurity company, which produced operational recommendations including automating a manager sign-off backlog, time-boxing the assessment, and establishing a weekly completion digest for managers.
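The cleanup-before-merge problem described above is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration of the kind of normalization a training-export merge typically requires; the file names, column names, and status labels are assumptions for this example, not the actual Highspot schema or the paper's real data.

```python
import pandas as pd

# Hypothetical export files -- illustrative only, not the real dataset.
completions = pd.read_csv("highspot_completions.csv")  # Learner Name, Completed At, Status
roster = pd.read_csv("sales_roster.csv")               # rep_name, region, start_date

# Name fields: trim whitespace and unify case so "  jane DOE " matches "Jane Doe".
completions["learner_key"] = completions["Learner Name"].str.strip().str.lower()
roster["learner_key"] = roster["rep_name"].str.strip().str.lower()

# Timestamps: exports often mix formats; coerce failures to NaT instead of crashing.
completions["completed_at"] = pd.to_datetime(completions["Completed At"], errors="coerce")

# Status codes: collapse synonymous labels into one canonical value before filtering.
status_map = {"complete": "completed", "completed": "completed", "done": "completed"}
completions["status"] = completions["Status"].str.strip().str.lower().map(status_map)

# Only now is the "simple question" (who finished, and when?) safe to ask.
merged = roster.merge(
    completions[completions["status"] == "completed"],
    on="learner_key",
    how="left",
)
print(merged[["rep_name", "region", "completed_at"]].head())
```

Each step guards against the quiet failure mode the paper flags: without the normalization and coercion, mismatched names, formats, and labels silently drop rows from the merge rather than raising an error.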
1. Perform a Needs Assessment
The Highspot data analysis functioned as an applied needs assessment: identifying a process gap (manager sign-off bottleneck), documenting its impact on learner outcomes (delayed completion and revenue entry), and recommending targeted interventions grounded in evidence.
8. Recommend Instructional Strategies
The final section of the paper explicitly connected data insights to instructional and operational changes: time-boxing the assessment, automating workflow steps, and creating a manager feedback loop, demonstrating the ability to move from analysis to actionable instructional improvement.
9. Develop Performance Measurement Instruments
Critically evaluated the design of analytics instruments themselves, reflecting on which visualization types (stacked bars, box plots, scatterplots) best serve specific analytical questions, and on how survey instruments and data collection tools must be designed to avoid encoding bias.
12. Evaluate Instruction, Program, and Process
Applied a data-driven evaluation lens to real onboarding training data at a cybersecurity company — analyzing assessment completion times, score relationships, and time-to-first-deal metrics — and translated those findings into three specific, actionable operational recommendations.
Candidates design and implement assessment and evaluation plans that align with learning goals and instructional activities.
Course: EME 6456 Online Teaching Methods
Date: Fall 2025
Artifact: Evaluation Rubric
Role: Sole Creator
Project Type: Instructional Video
This artifact is a comprehensive online course evaluation rubric created as a culminating assignment in EME 6456: Online Teaching Methods. The rubric was designed to provide a structured, evidence-based framework for evaluating the quality of online courses across six essential dimensions: Course Organization and Navigation (10 points), Learning Objectives and Alignment (8 points), Instructional Materials and Content Quality (6 points), Interactivity and Learner Engagement (6 points), Technology, Accessibility, and Universal Design (5 points), and Assessment and Feedback (5 points), totaling 40 possible points. Each category includes three performance-level descriptors: Needs Improvement, Meets Expectations, and Exemplary, enabling evaluators to assess courses consistently and comparably.
The rubric was grounded in major frameworks from the online learning literature, including Quality Matters standards, Chickering and Gamson's Seven Principles for Good Practice in Undergraduate Education, and Universal Design for Learning (UDL) guidelines. The accompanying written explanation articulated the reasoning behind each category's inclusion and weighting, connecting design decisions to specific course experiences: hands-on course creation during the semester, peer course review activities, and engagement with readings on cognitive overload, backward design, and the centrality of assessment to online learning. Course Organization received the highest weighting (10 points) because, as I noted in the explanation, 'without intuitive organization, even strong content becomes difficult to access'; the digital structure functions as the classroom itself in an online environment.
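To make the weighting scheme concrete, the sketch below encodes the rubric's six categories and point values as a simple data structure and tallies one hypothetical evaluation. The scoring function, the fractional values assigned to each performance level, and the sample ratings are assumptions for this illustration, not part of the artifact itself.

```python
# Categories and maximum points, as defined in the rubric (totaling 40).
RUBRIC = {
    "Course Organization and Navigation": 10,
    "Learning Objectives and Alignment": 8,
    "Instructional Materials and Content Quality": 6,
    "Interactivity and Learner Engagement": 6,
    "Technology, Accessibility, and Universal Design": 5,
    "Assessment and Feedback": 5,
}

# Three performance levels, expressed here (an assumption for this sketch)
# as fractions of each category's maximum points.
LEVELS = {"Needs Improvement": 0.5, "Meets Expectations": 0.8, "Exemplary": 1.0}

def score_course(ratings: dict[str, str]) -> float:
    """Tally an evaluation: one performance level per rubric category."""
    assert set(ratings) == set(RUBRIC), "every category must be rated"
    return sum(RUBRIC[cat] * LEVELS[level] for cat, level in ratings.items())

# Hypothetical peer-review ratings for one course.
example = {
    "Course Organization and Navigation": "Exemplary",
    "Learning Objectives and Alignment": "Meets Expectations",
    "Instructional Materials and Content Quality": "Meets Expectations",
    "Interactivity and Learner Engagement": "Needs Improvement",
    "Technology, Accessibility, and Universal Design": "Exemplary",
    "Assessment and Feedback": "Meets Expectations",
}

assert sum(RUBRIC.values()) == 40  # the category weights sum to 40 points
print(f"Total: {score_course(example):.1f} / 40")
```

Encoding the instrument this way also makes the design decision visible: the 10-point weight on Course Organization means that category alone moves a score more than any other single dimension.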
6. Write Performance-Based Objectives
Each rubric category translates broad quality principles (alignment, engagement, accessibility) into specific, observable behaviors and evidence that evaluators can look for — mirroring the objective-writing process of moving from abstract goals to concrete, measurable criteria.
8. Recommend Instructional Strategies
The rubric's Interactivity and Engagement category explicitly evaluates the presence of specific instructional strategies: gamification, scenario-based learning, peer discussion, and simulation, reflecting a pedagogical stance that online learning requires intentional, diverse engagement design.
9. Develop Performance Measurement Instruments
Designed a six-category, three-level evaluation rubric with descriptive performance criteria at each level, demonstrating the ability to construct a valid, reliable instrument for assessing instructional quality in a way that is both grounded in theory and practical for evaluators to apply.
12. Evaluate Instruction, Program, and Process
Applied the rubric in practice through peer course review activities during the semester, using it to evaluate classmates' online courses and providing structured feedback grounded in the criteria, completing the full evaluate-design-apply cycle.