Harrison Forsyth
Portfolio
Core skills and interests
3D Scanning: Photogrammetry, LiDAR, Neural Rendering (NVIDIA Instant NeRF)
Coding Languages: C#, C++, Python, HTML
Real-time Engines/VR Development: Unity 3D, Unreal Engine (4 and 5)
AI Development: Integration of LLMs and multi-modal AI solutions with real-time engines; AI conversational agents
GIS: Map making, Plotting, Scripting, Data management, ArcGIS, QGIS
Brain Computer Interface: Brainwave-driven applications, Biometrics, Neuroscience
Interests: VR/3D Simulation and Teaching, AI Conversational Agents, Constructionist Teaching, Gamification, Experiential Learning, Ancient Languages, Roman Epigraphy, Socio-Spatial Dynamics of Domestic Space, and the Ancient Family.
Innovation philosophy
My development philosophy when creating VR programs, conducting 3D scans, making GIS maps, building AI conversational agents, or even designing a simple database is to ensure that the technology always addresses a real problem in the research. This is best summarized by the phrase "don't create solutions for problems that don't exist". As a developer and researcher, I live by this statement. For example, 3D scanning is valuable in archaeology because excavation is a destructive process; with 3D scans we can revisit past excavations in three dimensions and capture far more detail than simple photography allows. Likewise, VR training for archaeology provides a sense of immersion comparable to actually being on-site, without the prohibitive costs of travel or the relative inaccessibility of archaeological materials in the classroom. Another use-case for 3D scanning is preserving ephemeral or changing urban landscapes. A project recording street art/graffiti, for example, would benefit greatly from 3D scanning and VR visualization because the entire context of the artwork can be captured as a digital twin and revisited in 3D. This improves upon the limitations of conventional photo archives and provides future scholars with richer information about the city. In each of these cases, the technology addresses a genuine research problem. In this way, researchers can avoid making a gimmick of technology and provide a proper use-case for their innovative solutions.
Projects
Mixed Reality Firefighter Training
Institution: Durham College
Purpose: An NSERC-funded project with the Durham College Mixed Reality Capture Studio, Durham College Health Sciences, and Oshawa Fire Services, involving the development of a mixed reality firefighter training application.
My role: Principal Investigator and Lead Mixed Reality Developer
Results: Documentary about the research project (coming soon)
Roman city teaching experiment
Institution: York University
Purpose: Pedagogical experiment in which I taught students to design 3D reconstructions of Roman public buildings and present them to the class as an assignment.
My role: Principal Researcher and Instructor
Results: Published in Journal of Classics Teaching (Link)
AI Conversational Agent SDK (Unity 3D and Unreal)
Institution: Durham College, Office of Research Services, Innovation and Entrepreneurship.
Purpose: The creation of an easy-to-use tool for building AI-driven digital humans in Unity and Unreal, with customizable personalities, motivations, appearances, and voices. Supports unscripted, fully verbal dialogue with the user. Ideal for training, entertainment, simulation, and productivity.
My role: Principal Investigator and Developer.
AI Vision Integration
Institution: Durham College MRC Studio, Office of Research Services, Innovation and Entrepreneurship.
Industry Partner: Escent Labs Inc.
Purpose: Integration of OpenAI's vision model with AI conversational agents in Unity 3D, allowing LLMs to access visual data in mixed reality.
My role: Principal Investigator and Technical Lead.
Brainwave Driven Meditation Apps (Brain Computer Interface)
Institution: Durham College MRC Studio, Office of Research Services, Innovation and Entrepreneurship.
Industry Partner: InteraXon Inc. (Muse).
Purpose: Prototyping and developing VR-based meditative applications using the Muse EEG headband. Work covered user experience, core mechanics, visual effects, visually evoked potentials, and biometric signal filtering.
My role: Principal Investigator and Technical Lead.
M-Body AI: Generative Character Animation
Institution: Durham College MRC Studio, Office of Research Services, Innovation and Entrepreneurship.
Partners: Sheridan College, Centre de développement et de recherche en intelligence numérique, and Laboratoire en innovation ouverte.
Purpose: Part of a $2M NSERC research grant, M-Body is a generative character animation tool for various industries. LINK
My role: Principal Investigator; motion capture and data collection.
APASQ "Et après" Digital
Institution: York University; ToasterLab; Association des professionnels des arts de la scène du Québec (APASQ)
Purpose: The digitization of APASQ's "Et après" art exhibit and its deployment for public engagement.
My role: Consultant and Developer; photogrammetry, 3D modelling, Blender, Unreal Engine, WebGL.