Cross-Reality Lifestyle

Cross-Reality Lifestyle: Integrating Physical and Virtual Lives Through Multiplatform Metaverse

IEEE Pervasive Computing 2025

Metaverse
Virtual Reality

We propose the concept of “Cross-Reality Lifestyle,” where activities in physical and virtual spaces are seamlessly combined rather than treated as separate domains. To analyze how these activities integrate, we introduce the ACE Cube framework, which captures three patterns of integration: Amplification, Complementary, and Emergence. Through an analysis of over 200 real-world cases, we offer actionable insights for platform design and technology investment prioritization, along with design principles for integrating physical spaces and the metaverse into everyday life.

IEEE Xplore arXiv

MetaGadget

MetaGadget: A Framework for Integrating IoT with Metaverse Platforms

IEEE Pervasive Computing 2025, IEEE ISMAR 2024 Demo

Metaverse
Virtual Reality
Human-Computer Interaction

We propose MetaGadget, a framework that simplifies connecting IoT devices to commercial metaverse platforms. By adopting HTTP-based event triggers instead of persistent connections, it significantly lowers the technical barrier for developers and educators. We validated the framework through two workshops involving smart home control and custom device integration, demonstrating its effectiveness. The results also revealed new application possibilities such as multi-user simultaneous interaction and cross-space coordination, bridging the virtual and physical worlds.
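The HTTP event-trigger idea can be illustrated with a minimal sketch: each in-world interaction fires one stateless HTTP request at a small webhook server that forwards it to a device. This is an assumption-laden illustration, not MetaGadget's actual API; the `/trigger/...` route, `TriggerHandler` class, and simulated `devices` dictionary are all hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Simulated device state; a real deployment would call the device's own API here.
devices = {"plug": False}

class TriggerHandler(BaseHTTPRequestHandler):
    """Handles one stateless HTTP request per in-world interaction,
    so no persistent connection to the metaverse platform is required."""

    def do_POST(self):
        name = self.path.rsplit("/", 1)[-1]  # e.g. POST /trigger/plug -> "plug"
        if name in devices:
            devices[name] = not devices[name]  # toggle the (simulated) device
            body = b"on" if devices[name] else b"off"
            status = 200
        else:
            body = b"unknown device"
            status = 404
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass
```

An in-world item would then issue a POST to, say, `http://host:8000/trigger/plug`; serving is a one-liner: `HTTPServer(("", 8000), TriggerHandler).serve_forever()`.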

IEEE Xplore arXiv

FluidicSwarm

FluidicSwarm: Embodiment of Swarm Robots Using Fluid Behavior Imitation

ACM SIGGRAPH 2025 ETech

Robotics
Human-Computer Interaction

We proposed FluidicSwarm, a control system that treats robot swarms as fluid-like entities, enabling users to intuitively operate them as extensions of their own bodies. The system allows users to manipulate the characteristics of the fluid that the robot swarm mimics through hand movements, thereby changing properties such as the swarm’s shape and flexibility. This functionality enables users to easily and flexibly control large numbers of robots for diverse tasks including obstacle avoidance and object transportation.

Detail ACM DL

MagicCraft

MagicCraft: Dynamic 3D Object Generation for Metaverses via Natural Language Input

IEEE Access 2025, IEEE VR 2025 Demo

Metaverse
Virtual Reality

We propose MagicCraft, a system that generates dynamic, interactive 3D objects from natural language prompts and instantly deploys them into metaverse environments. The system integrates the entire pipeline from image generation to 3D modeling, scaling, and behavior scripting, enabling users without specialized expertise to create complex objects. Through implementation on the Cluster platform and user experiments, we demonstrated that MagicCraft significantly reduces creation time compared to conventional methods, contributing to the democratization of content creation.

IEEE Xplore arXiv

SPH

Swarm Robot Navigation in Obstacle-Unaware Environments Using Smoothed Particle Hydrodynamics

Advanced Robotics 2025, IEEE IROS 2024

Robotics

We propose a swarm robot control method based on an Indirect Collision Detector and Smoothed Particle Hydrodynamics (SPH) that enables a robot swarm to navigate to target positions while avoiding obstacles, even without environmental sensing capabilities. The controller indirectly detects contact points between robots and obstacles using only positional and velocity information. By combining repulsive forces from these contact points with fluid-like swarm formations generated through SPH, the method achieves reliable and efficient obstacle avoidance.
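The flavor of the SPH repulsion term can be sketched in a few lines: each robot receives a force along the negative gradient of a smoothing kernel summed over its neighbors, which pushes robots apart only inside the kernel's support radius. The exact kernels, gains, and the Indirect Collision Detector are in the paper; the 2D spiky kernel, `h`, and `k` below are illustrative assumptions.

```python
import math

def spiky_grad_2d(rx, ry, h):
    """Gradient of a 2D spiky smoothing kernel at offset (rx, ry); zero outside radius h."""
    r = math.hypot(rx, ry)
    if r == 0.0 or r >= h:
        return 0.0, 0.0
    c = -30.0 / (math.pi * h ** 5) * (h - r) ** 2 / r  # dW/dr, divided by r for direction
    return c * rx, c * ry

def repulsion_force(positions, i, h=1.0, k=1.0):
    """Pressure-like SPH repulsion keeping robot i apart from its neighbors."""
    fx = fy = 0.0
    xi, yi = positions[i]
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        gx, gy = spiky_grad_2d(xi - xj, yi - yj, h)
        fx -= k * gx  # minus the gradient: the force points away from neighbor j
        fy -= k * gy
    return fx, fy
```

Summing such pairwise terms is what gives the swarm its fluid-like spacing; the paper combines this with repulsion from indirectly detected obstacle contact points.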

Detail T&F Online arXiv YouTube

ChromaGazer

ChromaGazer: Unobtrusive Visual Guidance Using Imperceptible Color Vibrations

IEEE TVCG 2025 (IEEE VR 2025)

Augmented Reality
Human-Computer Interaction

We propose ChromaGazer, an unobtrusive visual guidance method that leverages color vibrations to direct gaze without causing noticeable discomfort. The technique exploits the phenomenon whereby two colors alternating at 25 Hz or above perceptually fuse into a single color, enabling gaze guidance toward target regions without users consciously noticing any flicker. User experiments confirmed that the method reduces visual search time in natural images while preserving the perceived naturalness of the scene. We also discuss potential applications in advertising, reading assistance, and AR task support.
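The core trick can be sketched as splitting a base color into a pair that averages back to it, then swapping the pair every frame. This is a simplified per-channel RGB sketch; the paper's actual modulation (color axis, amplitude, calibration) differs, and `vibration_pair` / `color_at_frame` are hypothetical names.

```python
def vibration_pair(base, delta):
    """Split a base sRGB color into two colors that average back to the base.

    Alternating the pair at 25 Hz or above (e.g. swapping every frame on a
    60 fps display yields a 30 Hz alternation) should fuse perceptually into
    the base color, so the modulation can attract gaze without visible flicker.
    """
    hi = tuple(min(255, c + d) for c, d in zip(base, delta))
    lo = tuple(max(0, c - d) for c, d in zip(base, delta))
    return hi, lo

def color_at_frame(base, delta, frame):
    """Color to draw on a given frame: swap the pair members each frame."""
    hi, lo = vibration_pair(base, delta)
    return hi if frame % 2 == 0 else lo
```

Regions outside the guidance target are simply drawn with the unmodulated base color, so only the target region carries the (imperceptible) vibration.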

IEEE Xplore arXiv

MagicItem

MagicItem: Dynamic Behavior Design of Virtual Objects With Large Language Models in a Commercial Metaverse Platform

IEEE Access 2025

Metaverse
Virtual Reality
Human-Computer Interaction

We proposed MagicItem, a tool that generates object behaviors in VR spaces from natural language on Cluster, a commercial metaverse platform. By integrating GPT-4 with Cluster’s proprietary scripting language and embedding TypeScript definition files into the prompt, users without programming experience can define dynamic object behaviors through natural language alone. A large-scale online experiment with 63 general users demonstrated that even participants with no programming background successfully completed behavior modification tasks with high satisfaction.
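The prompt-grounding idea can be sketched as follows: embedding the platform's TypeScript type definitions in the prompt restricts the model to API calls that actually exist. MagicItem's real prompt is not reproduced here; `build_item_prompt` and the wording below are hypothetical.

```python
def build_item_prompt(user_request, ts_definitions):
    """Assemble an LLM prompt for generating an item behavior script.

    Including the scripting API's TypeScript declaration files grounds the
    model in the platform's actual API surface, so non-programmers' natural
    language requests are translated into calls that really exist.
    """
    return (
        "You write a behavior script for a metaverse item.\n"
        "Use only the API declared below.\n\n"
        "```typescript\n" + ts_definitions + "\n```\n\n"
        "User request: " + user_request + "\n"
        "Reply with the script only, no explanation.\n"
    )
```

The assembled string would then be sent to the LLM (GPT-4 in the paper) and the returned script attached to the object.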

IEEE Xplore

Micro Elastic Pouch Motors

Micro Elastic Pouch Motors: Elastically Deformable and Miniaturized Soft Actuators Using Liquid-to-Gas Phase Change

IEEE RA-L 2021 (IEEE RoboSoft 2021)

Soft Robotics
Robotics

We proposed Micro Elastic Pouch Motors, miniaturized soft actuators capable of large deformation, made of an elastic rubber pouch filled with a low-boiling-point liquid. When the liquid reaches 34 °C, it evaporates and the whole structure inflates. The proposed fabrication method yields a miniaturized pouch approximately 5 mm in diameter with a thin rubber membrane, which can inflate to more than 86 times its initial volume and generate a force of approximately 20 N at maximum.

IEEE Xplore YouTube

CoVR

CoVR: Co-located Virtual Reality Experience Sharing for Facilitating Joint Attention via Projected User-Perspective View

ACM SIGGRAPH Asia 2020 ETech

Virtual Reality
Human-Computer Interaction

VR experience sharing between users wearing head-mounted displays (HMD users) and users not wearing them (Non-HMD users) is a promising approach to bridging the experience gap between the two. We proposed CoVR, a co-located VR sharing system that presents projected user-perspective images to Non-HMD users through a focus-free projector mounted on the HMD. We introduce a design methodology for choosing the viewpoint and perspective of the images presented to HMD and Non-HMD users, augment the projected images with additional information, and demonstrate example applications.

ACM DL