iQ3Connect Product Release v2025.5.1

Faster, Simpler, and More Intuitive Training Creation

Version 2025.5.1 introduces major usability enhancements designed to make immersive training creation faster and more approachable—while still giving advanced users the flexibility they need. From a redesigned Training Editor to repeatable animations and cleaner trainee interfaces, this release makes it easier than ever to build polished, interactive 3D training experiences.

 

  • A New, Simplified Training Editor: Building training modules is now dramatically easier with our redesigned Training Editor, which introduces a streamlined, step-based workflow that helps authors create content quickly and confidently.


 

  • Looping Animations: Animations can now be set to loop automatically, making it simple to create continuous motions such as rotating parts, indicator movements, or background activity. 
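Under the hood, a looping animation is simply elapsed time wrapped around the animation's period. The sketch below is purely illustrative (plain Python, not iQ3Connect's implementation) and shows how a continuously rotating part derives its angle from wrapped time:

```python
# Illustrative sketch (not iQ3Connect's API): a looping rotation wraps
# elapsed time into the current cycle with the modulo operator.

def loop_angle(elapsed_s: float, period_s: float, degrees_per_cycle: float = 360.0) -> float:
    """Angle of a continuously rotating part at a given elapsed time."""
    t = elapsed_s % period_s            # wrap time into the current cycle
    return degrees_per_cycle * (t / period_s)

# A part on a 4-second rotation cycle:
print(loop_angle(1.0, 4.0))   # 90.0 degrees: a quarter turn
print(loop_angle(5.0, 4.0))   # 90.0 again: the animation has looped
```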

 

  • Easier Setup of Selection-Based Interactions: Defining selection-based interactions is now more intuitive. The updated interface for configuring selection triggers allows authors to easily view, organize, and adjust the conditions that move a trainee from one Step to the next. This makes it simpler to create interactive training flows based on object selections or other user actions.

 

Explore iQ3Connect v2025.5.1 to experience faster authoring, clearer interactions, and a more intuitive workflow for creating immersive 3D training.

 

Request a demo to learn more!

iQ3Connect Product Release v2025.4

Work Smarter, Train Faster

Version 2025.4 delivers major advancements in AR authoring, 3D rendering, and training usability—helping organizations create, align, and deploy immersive learning content faster than ever. From marker-based AR alignment to adaptive training behaviors and refined UI design, every improvement in this release is built to make immersive training more intuitive, more realistic, and more accessible across all devices.

 

  • Build Directly in AR: Authoring immersive training modules in AR has never been easier. The entire setup can now be completed directly in AR mode, ensuring virtual objects stay perfectly aligned with the physical world as you build and giving trainers a true “what-you-see-is-what-you-get” experience.

 

  • Simplified AR Alignment: Speed up setup and ensure perfect spatial alignment between real and virtual assets. With marker-based alignment (available for Android phones and tablets), authors can add and print virtual markers that sync the digital scene to the real world when scanned in AR mode—enabling fast, accurate, and repeatable placement. Alternatively, with the new surface alignment mode, users can quickly anchor virtual 3D content to real-world surfaces such as desks, walls, or floors—making it simple for end-users to place and view content naturally across all XR-enabled devices.

 

  • Realistic Transparency and Rendering: Experience the most visually accurate version of iQ3Connect yet.

    • Transparent materials now render more accurately for realistic glass-like surfaces and artificial “x-ray” views.

    • GLB model colors are more consistent and true-to-life.

    • Reflectivity and shading have been refined for higher realism across devices.

    These updates bring high-quality visuals to every training module.

 

  • Device-Adaptive Training Behavior: Training content can now automatically adapt to the user’s device. Authors can control whether specific actions—like loading viewpoints or states—apply to all users or only those in 2D (non-XR) mode. This flexibility ensures the right experience for every device type, from desktop to headset.

 

  • Smarter Animation Authoring: Creating motion within 3D content is now more intuitive. Authors can define rotation pivots and axes visually by selecting points on a model or by selecting the center of a user-defined circle. The result: faster, more accurate, and more natural animations.

 

  • Enhanced Information Tags: Information Tags are now more flexible and expressive:

    • Optional “leader lines” visually connect tags to points on 3D models.

    • Tags can be repositioned and reset using an intuitive movement triad.

    • Tag states (open/closed, position, leader line, etc.) are saved within training states for consistent experiences.

    Together, these upgrades make information tagging clearer and more interactive.

 

Explore iQ3Connect v2025.4 today and see how the latest generation of AR authoring, visualization, and usability tools can accelerate your training transformation.

 

Request a demo to learn more!

iQ3Connect Product Release v2025.3

The latest release of iQ3Connect brings powerful upgrades that make immersive training and virtual collaboration even more intuitive, accessible, and impactful. With expanded AR support for iOS, Gaussian Splat import, new automation tools, and authoring improvements, iQ3Connect 2025.3 is designed to help organizations deliver spatial experiences faster, more easily, and with greater reach than ever before.

 

Training and Work Instructions

 

  • Web-Based Augmented Reality (AR) on iPhone and iPad: Web-based augmented reality (AR) is now available for users with iOS devices such as the iPhone and iPad. Just download the iQ3Connect WebXR Viewer from the App Store to experience your iQ3Connect training modules and virtual experiences in augmented reality. Standard web browsers on iOS devices (such as Safari or Chrome) can still view iQ3Connect content in 2D mode.

 

  • Support for Gaussian Splats: iQ3Connect now supports cutting-edge Gaussian Splat rendering, enabling you to visualize high-quality spatial data from .ply, .splat, or .ksplat files. Combine splats with point clouds, CAD, and multimedia to create rich, interactive, contextual environments for your training or engineering use case.

 

  • Smarter Training Creation with the Training Wizard: Quickly and automatically build structured virtual training modules in minutes. The new Training Wizard transforms a series of steps into a fully functioning experience—complete with UI, navigation, and text instructions. You can launch immediately or fine-tune as needed. Pair it with the new Text Editor to quickly review and edit your text-based content from a single streamlined UI.

 

  • One-Stop Text Editing: Manage all instructional text from a single interface. The new Text Editing Mode lets authors view and edit training instructions in a streamlined table view, with real-time updates across your experience.

 

  • XR Step Menu for Headsets: The Step Menu, previously optimized for PCs, tablets, and phones, now works seamlessly in AR and VR headsets. When a user is in immersive (XR) mode, the Step Menu auto-positions in the virtual 3D scene for optimal usability.

 

  • Instant Contextual Help: Need guidance while building your training? Contextual help icons now appear alongside each action property in the Training Editor, linking you directly to the relevant documentation—no need to search manually.

 

  • User Interface & Experience Enhancements:

    • UI elements now auto-scale and auto-position across devices and screen sizes

    • XR/AR/VR mode buttons are more visible and user-friendly

    • AR mode includes a better entry/exit experience and improved user positioning

 

  • Improved 3D Model Rendering: Rendering quality just got a major boost. GLB models now support emissive materials in addition to base color, metallic, roughness, and normals. Reflectivity is more realistic, and a new loading screen improves the end-user experience.

 

  • Better Performance for Complex Animations: Heavy models with advanced animations now run more smoothly—even on lower-end devices—thanks to optimization improvements across the animation engine.

 

  • Expanded Authoring Controls for Interactions: Authors can now assign actions to entire groups or model tree branches, making it easier to create interactive parts of an assembly. Additionally, new end-user interaction modes allow intuitive world manipulation and object movement.

 

  • Faster, Easier Training Authoring:

    • Up to 95% faster load times for states and training previews

    • Automatic unhighlighting and visibility control for edited objects

    • Fewer interruptions during editing with improved preview controls

    • Consistent style settings for buttons, text, and UI elements

    • Multimedia placement now adapts to all screen sizes with percentage-based scaling
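Percentage-based scaling means a placement is stored as fractions of the screen rather than fixed pixels, so the same layout resolves correctly on any device. A minimal illustration (the function name and signature are ours, not iQ3Connect's):

```python
# Illustrative sketch (not iQ3Connect internals): storing multimedia layout
# as percentages lets one placement resolve correctly on any screen size.

def resolve_layout(pct_x: float, pct_y: float, pct_width: float,
                   screen_w: int, screen_h: int) -> tuple[int, int, int]:
    """Convert percentage-based placement to pixel coordinates for one screen."""
    x = round(screen_w * pct_x / 100)
    y = round(screen_h * pct_y / 100)
    w = round(screen_w * pct_width / 100)
    return x, y, w

# The same 10%-from-left, 80%-down, 30%-wide video on two very different screens:
print(resolve_layout(10, 80, 30, 1920, 1080))  # (192, 864, 576)
print(resolve_layout(10, 80, 30, 390, 844))    # (39, 675, 117)
```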

 

 

Virtual Workspaces and Classrooms

 

  • Dynamic Scaling: Quickly resize 3D models directly within iQ3Connect Workspaces or the Training Editor. This makes content more adaptable for various environments and use cases.

 

  • Optimized Multi-User Collaboration: iQ3Connect Workspaces now scale more reliably, enabling up to 100 participants even over low-bandwidth connections, thanks to optimizations in server-client communication.

 

 

Enterprise Hub

 

  • Training Pack-n-Go: Export complete training modules—including all assets—for backup, transfer, or deployment across servers. This new “Pack-n-Go” feature simplifies content migration and supports offline storage for business continuity.

 

 

Let us know how these updates are helping you deliver smarter, faster, and more immersive training. To explore these new capabilities, contact us or start a free trial.


iQ3Connect Product Release v2025.2

iQ3Connect 2025.2 is here! We’re making XR training and work instruction creation easier, faster, and more powerful with new automation capabilities and AI/IoT integrations – so you can focus on what matters most: upskilling your workforce and improving operational efficiency. Key updates include Automated Experience Creation, a new Step Instruction Menu, and AI and IoT integration for work instructions, task verification, and digital twins. Explore the new capabilities of iQ3Connect 2025.2 below. 

 

Training and Work Instructions

 

  • Automated Experience Creation: Automatically create XR training and experiences using the new Training Wizard, or quickly duplicate existing content and structure using the new copy/paste functionality. The Training Wizard automatically creates a fully functioning XR experience from two or more States (i.e., Scenes/Steps) and can also add Next/Back navigation buttons to a step-by-step experience. Copy/paste lets existing content (actions) or structure (timelines) be quickly duplicated.

 

  • Work Instruction Task Verification – AI + IoT Integration: Integrate artificial intelligence outputs and IoT sensor data into XR experiences. A new training action, External Signal Receiver, listens for external data input and updates the XR experience in real time, either reporting the data as-is or adapting the experience dynamically based on its content. These integrations are primarily targeted toward digital twins and work instruction task verification but can be applied to any use case.
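The general shape of such an integration is an external system emitting JSON readings that a receiver maps onto scene updates. The sketch below illustrates the pattern only; the message schema and handler are hypothetical, not iQ3Connect's actual format:

```python
# Conceptual sketch of the integration pattern. The message fields and the
# handler below are hypothetical, NOT iQ3Connect's actual schema: an external
# AI/IoT system emits JSON readings, and a receiver maps them to scene updates.
import json

def handle_external_signal(raw: str, thresholds: dict[str, float]) -> dict:
    """Turn one sensor message into a display update for the XR scene."""
    msg = json.loads(raw)
    sensor, value = msg["sensor"], msg["value"]
    # Report the data as-is, but flag it when it crosses a known threshold.
    status = "alert" if value > thresholds.get(sensor, float("inf")) else "ok"
    return {"label": f"{sensor}: {value}", "status": status}

# An IoT temperature reading arriving over the wire:
update = handle_external_signal('{"sensor": "temperature", "value": 87.5}',
                                thresholds={"temperature": 80.0})
print(update)  # {'label': 'temperature: 87.5', 'status': 'alert'}
```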

 

  • Step Instruction Menu: The new Step Instruction Menu provides an improved UI that combines text information with menus and buttons. It is completely customizable, from the colors to the icons to the number of buttons, and works in 2D, AR, and VR modes on mobile, tablet, PC, or HMD.

 

  • XR Action – Time Tracking: Capture the time it takes to perform a step (or any arbitrary sequence) using the new Time Tracking actions. An unlimited number of durations can be tracked, with each duration stored as a data point within the experience. This data can be automatically passed to an LMS or easily exported to third-party tools.
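Conceptually, each tracked duration is a named timer whose result becomes a stored data point. A minimal sketch of the idea (the class and method names are illustrative, not iQ3Connect's API):

```python
# Minimal sketch of duration tracking. Names are illustrative, not
# iQ3Connect's API: start/stop named timers and keep each duration as a
# data point that could later be exported or passed to an LMS.
import time

class DurationTracker:
    def __init__(self) -> None:
        self._starts: dict[str, float] = {}
        self.durations: dict[str, float] = {}   # recorded data points

    def start(self, name: str) -> None:
        self._starts[name] = time.monotonic()

    def stop(self, name: str) -> float:
        elapsed = time.monotonic() - self._starts.pop(name)
        self.durations[name] = elapsed
        return elapsed

tracker = DurationTracker()
tracker.start("step_3_valve_replacement")
# ... trainee performs the step ...
tracker.stop("step_3_valve_replacement")
print(tracker.durations)
```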

 

  • XR Action – Flash Object: A new training action that causes the selected object to flash. The flashing can be customized to change the object’s visibility, transparency, or highlighting.

 

  • Animations in GLB files: Playback of animations embedded in GLB files now supports new end-of-animation behaviors: reset, stop, and loop. Reset returns the animated objects to their initial position/configuration, stop leaves them in their position/configuration at the end of the animation, and loop plays the animation continuously. These settings are included as part of the GLB Animation action.
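The three end behaviors can be summarized by how each one chooses an effective animation time once playback passes the animation's duration. An illustrative sketch (not the engine's implementation):

```python
# Sketch of what the three end behaviors mean for playback (illustrative,
# not the engine's code): given elapsed time t and animation duration d,
# each behavior chooses a different effective animation time.

def effective_time(t: float, duration: float, behavior: str) -> float:
    if t < duration:
        return t                      # still playing: all behaviors identical
    if behavior == "reset":
        return 0.0                    # snap back to the initial configuration
    if behavior == "stop":
        return duration               # hold the final configuration
    if behavior == "loop":
        return t % duration           # wrap around and keep playing
    raise ValueError(f"unknown behavior: {behavior}")

# A 2-second animation, sampled at t = 5 s:
print(effective_time(5.0, 2.0, "reset"))  # 0.0
print(effective_time(5.0, 2.0, "stop"))   # 2.0
print(effective_time(5.0, 2.0, "loop"))   # 1.0
```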

 

  • Augmented Reality – Virtual Hands: When entering AR mode on a head-mounted display (HMD), the virtual hands are no longer shown by default. Instead, the virtual menus and guides are mapped directly to the user’s physical hands, improving visibility of the physical space. A new setting can re-enable the virtual hands and set that as the default behavior if preferred.

 

  • Text Box Behavior Improvements: Easily display text to end-users in 3D environments with our new Text Box behavior: Scene-Follow. When set to Scene-Follow mode, text boxes will appear in front of the user and will gradually follow them around the 3D environment. 

 

  • XR Performance Improvements: The lag and frame rate drops for animations and state transitions (i.e. LoadState actions) have been greatly reduced, leading to an overall smoother experience for the end-user.

 

  • Improved UI/UX for Experience Authoring: Key elements of the experience creation process have been updated. The Training Add Action menu has been reorganized into a more intuitive order, and the Create State and Add Timeline buttons have been moved to the relevant section headers, eliminating the need to scroll to access these functions.

 

  • Fewer Movement Locked and Orientation Locked Messages: When an end-user is in an XR experience with movement or orientation locked, a message informs them that their controls are locked when they try to move or rotate. These messages were appearing too often, which could disrupt the experience, so their number and frequency have been greatly reduced.

 

 

Virtual Workspaces and Classrooms

 

  • Real-Time Digital Twins – AI + IoT Integration: Virtual workspaces can now be linked to data sources such as IoT sensors and AI outputs to create digital twins that update in real time based on the real-world environment.

 

  • Save Scene in 2D Menu: The Save Scene button has been added to the standard 2D menu, making it more readily accessible. It can now be found in the upper-left corner menu under My Content > Scenes & Sessions. Note: it is still available in the XR menu as well.

 

 

Enterprise Hub

 

  • Launch Workspace Improvements: Launching a Workspace has been streamlined to provide easier access to XR models and Workspace settings. Launching a Workspace directly from the Project Home screen now automatically selects all available XR models in that Project. Alternatively, entire folders can be selected from the XR Models page when launching a Workspace. Finally, a new quick-settings menu appears when launching a Workspace, with options to change the workspace duration, the public/private setting, and whether the menu is enabled.

 

  • Project Management and Default Template Editing: The ability to manage and administer Projects has been expanded. The Default Project Template can now be edited. Project Templates can now be used to update Projects (individually or in bulk) even after the Project has been created. The background environment can now be customized as part of the Project Template.

 

  • UI/UX Improvements for Inviting Users to Projects and Accessing Multimedia: The UI/UX for inviting users to Projects has been greatly improved, making it easier to quickly invite users and understand what invitations are still pending. Additionally, the Multimedia page is now the first page shown when the External Assets tab is selected.

 

  • Announcements Readability Improvement: When an announcement is selected, it is now fully expanded as a pop-up to improve readability.


iQ3Connect Product Release v2025.1

iQ3Connect v2025.1 introduces a range of enhancements to expand the capabilities of web-based spatial training and work instructions, while improving the ease of XR content creation. Key updates include intuitive touch-based AR alignment, new training actions and triggers to improve interactivity, navigation and event-responsiveness, enhancements to tracking and logic to simplify performance monitoring and adaptive adjustments, and a streamlined UI/UX to make the training creation process even easier. Explore the new capabilities of iQ3Connect 2025.1 below. 

 

Training and Work Instructions

 

  • Touch-based AR Alignment – Head Mounted Displays (HMDs): Precise alignment of the virtual and physical environments can now be achieved through intuitive touch-based alignment with the controllers or hands. This feature is compatible with any AR headset and only requires the end-user to touch two designated points in the physical environment. The process works without markers and can align to any arbitrary surface, regardless of size, shape, or material.
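Two touched points suffice because two correspondences between virtual and physical positions determine a horizontal rigid transform: a yaw angle plus a translation. The math sketch below illustrates why (it is not iQ3Connect's algorithm):

```python
# Illustrative math sketch (not iQ3Connect's algorithm): two point
# correspondences in the horizontal (XZ) plane determine a yaw rotation
# plus a translation that maps the virtual scene onto the physical world.
import math

def align_from_two_points(v1, v2, p1, p2):
    """Return (yaw_radians, (tx, tz)) mapping virtual XZ points onto physical ones."""
    # Yaw: angle between the virtual and physical segment directions.
    yaw = math.atan2(p2[1] - p1[1], p2[0] - p1[0]) - \
          math.atan2(v2[1] - v1[1], v2[0] - v1[0])
    c, s = math.cos(yaw), math.sin(yaw)
    # Translation: whatever offset remains after rotating the first virtual point.
    tx = p1[0] - (c * v1[0] - s * v1[1])
    tz = p1[1] - (s * v1[0] + c * v1[1])
    return yaw, (tx, tz)

# Virtual points along +X, physical points along +Z, offset by (1, 2):
yaw, t = align_from_two_points((0, 0), (1, 0), (1, 2), (1, 3))
print(round(math.degrees(yaw)), t)  # 90 (1.0, 2.0)
```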

 

  • XR Action – Move Objects: A new training action has been added to the iQ3Connect Training Studio: Move Objects. Creators of XR experiences can now visually define which virtual objects end-users can interact with and manipulate. Whitelist objects to make a small set interactive while everything else stays locked down, or blacklist objects to lock down a few objects while keeping everything else interactable.

 

  • XR Action – Wayfinding: A new training action has been added to the iQ3Connect Training Studio – Wayfinding. The wayfinding action displays a virtual path toward a specific part/location in the environment, enabling end-users to more easily find the part or navigate to the desired location. Wayfinding can be paired with a User Position Trigger to automatically spawn events and actions once the user arrives at the destination.

 

  • XR Trigger – User Position: The new User Position Trigger can spawn actions and events based on the user’s distance to a designated object. Use it to detect when the user comes within a certain distance of a desired object or location. This trigger is available from the Wayfinding action and can be used with or without wayfinding’s virtual path.

 

  • XR Trigger – Object-to-Object Proximity: The new Object-to-Object Proximity Trigger can spawn actions and events based on the distance between two virtual objects. Use it to detect when the distance between two objects falls below a defined threshold, such as when verifying that an end-user has correctly placed a virtual part or tool.
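The underlying check is a simple Euclidean distance comparison. A minimal illustration (not the engine's code):

```python
# Illustrative sketch (not the engine's code): a proximity trigger fires
# once two object centers come within a threshold distance, e.g. to verify
# that a part has been placed correctly.
import math

def proximity_triggered(pos_a, pos_b, threshold: float) -> bool:
    """True when the distance between two 3D positions falls below threshold."""
    return math.dist(pos_a, pos_b) < threshold

# A virtual bolt being moved toward its target location (0.05 m tolerance):
print(proximity_triggered((0.0, 1.0, 0.0), (0.0, 1.0, 0.5), 0.05))   # False: still 0.5 m away
print(proximity_triggered((0.0, 1.0, 0.48), (0.0, 1.0, 0.5), 0.05))  # True: within tolerance
```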

 

  • Tracking, Logic, and Variables: Our extensive tracking, logic, and variable system is now accessible directly from the Training Studio, meaning that scripting is no longer required. Our new visual interface makes it easy to record end-user outcomes (such as time to completion, incorrect steps, answers to questions, etc.) through a flexible tracking system. Define and change variables based on the metrics to be tracked and the performance of the end-user. Variables can be used in combination with logic statements to adjust the training dynamically to user performance and user selections.
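The tracking-variables-plus-logic pattern can be pictured as recording end-user outcomes into variables and then branching the training on them. A plain-Python sketch (the variable and step names are hypothetical, not Training Studio's actual interface):

```python
# Plain-Python sketch of the tracking-variables-plus-logic pattern. The
# variable and step names here are hypothetical, not Training Studio's
# actual interface: record outcomes into variables, then branch on them.

def next_step(variables: dict) -> str:
    """Pick the next step from tracked end-user outcomes."""
    if variables["incorrect_steps"] >= 3:
        return "remedial_review"          # struggling: branch to a review step
    if variables["time_to_completion_s"] < 60:
        return "advanced_challenge"       # fast and accurate: raise difficulty
    return "next_standard_step"

# Outcomes recorded while the trainee performed the current step:
outcomes = {"incorrect_steps": 1, "time_to_completion_s": 45}
print(next_step(outcomes))  # advanced_challenge
```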

 

  • Camera and Navigation Control: A creator’s ability to control the camera and navigation of the end-user has been drastically simplified with improvements to the Training Studio. Locking camera movement and/or orientation is now controlled through toggle switches in the State properties. Navigational boundaries (such as preventing end-users from walking through walls) can be set up visually, including the setup of stacked navigational rules (i.e. a user is allowed to move throughout a room but not through the objects within a room).

 

  • Download Training Data and Results: Training data and results can now be downloaded directly from the iQ3Connect Hub. In Workspace Templates > Past Workspaces, a new Download Training Results icon is available next to each past workspace.

 

  • Training Creation UI/UX Improvements: Several UI/UX improvements make training creation more efficient. The Training Studio now opens automatically when a training is created, and the starting timeline is premade. The action property viewer opens automatically when an action is added to a timeline, and a new start/stop button enables seamless transitions between authoring and training preview modes.

 

 

Virtual Workspaces and Classrooms

 

  • Improved Object Movement: Object movement has been drastically improved, providing faster and more accurate movement of objects within the 3D environment for both PC/tablet and XR controls. Movement is now controlled via click-and-drag (replacing the old click-on to move, click-off to stop system), while a simple move-speed modifier (such as holding the Shift key or clicking an on-screen button) lets users seamlessly transition between fast and precision movement.

 

  • Improvements to Rejoining a Workspace: When a user attempts to join a workspace multiple times from the same account, preference is given to the most recent connection, and older connections are removed from the workspace. To prevent accidental or malicious removal, the older connection is prompted to accept or reject the newer connection. Accepting the new connection (or not responding to the prompt) removes the old connection and admits the new one.

 

  • Information Tag Status Indicator and Render Improvement: When viewing the list of information tags in a workspace, active info tags (tags already added to the workspace) will now be highlighted in the list to provide a visual indication of what’s in the workspace and what’s not. Info tag placement on 3D objects has also been improved, minimizing improper occlusion of the info tag to improve its visibility.

 

  • Default Environment Update: The default background environment for the iQ3Connect Workspace has been updated.

 

 

Enterprise Hub

 

  • Simplified Help Access: Accessing the iQ3Connect Knowledge Base has been greatly simplified. The help icon now redirects users to the knowledge base directly.

 

  • XR Model Organization – Grid View: Grid View (as opposed to the default list view) allows users to view an enlarged snapshot of all their XR models in an organized grid directly from the iQ3Connect Hub.
