Philosophy of Progress and Iteration
My academic journey is guided by a philosophy of progress over results, emphasizing discovery and iteration as pathways to innovation. This approach has shaped my identity as a maker and researcher, inspiring me to deconstruct existing systems, replicate unfamiliar mechanisms, and build upon prior knowledge. My work bridges creative design with computational methodologies, embodying a recursive process of learning that enriches the fields of computational design and HCI.
Origami serves as a microcosm of this philosophy. The iterative act of folding exemplifies the pursuit of refinement, where each step enhances understanding and technique. This recursive approach parallels my academic and professional endeavors, where I continuously refine my methodologies to explore adaptive systems, generative processes, and responsive environments. In particular, I place humans at the center of this work, asking how systems can iteratively and actively fulfill people's changing needs, and how designs can move beyond serving a specific individual or group to adapt to any person or community.
Research Journey and Academic Preparation
My academic foundation in architecture, earned through a Bachelor of Architecture at Syracuse University, fostered a deep appreciation for design’s creative and problem-solving dimensions. However, my curiosity extended beyond design to the underlying tools and systems enabling interaction and creativity. This led me to pursue a Master of Science in Computational Design (MSCD) at Carnegie Mellon University, where I merged architectural principles with computational techniques and HCI.
During my undergraduate thesis, I applied generative AI techniques to residential design, integrating interior circulation and solar radiation analysis. While successful, this project highlighted a key limitation of static design paradigms: their inability to address evolving user needs. This insight motivated my shift toward adaptive systems that dynamically respond to user behaviors and environmental changes.
Courses like Physical Computing and Generative AI have provided hands-on experience with tools ranging from microcontrollers, sensors, and actuators to diffusion models and fine-tuning techniques such as LoRA. These skills equip me to develop computational systems that evolve iteratively and adapt dynamically. For example, in the Enhancing Origami Learning project, I designed a platform and user interface that compares different 3D reconstruction techniques, including photogrammetry, LiDAR with a turntable, video-to-3D-model pipelines, and NeRF. The platform serves both origami tutorial makers and learners, demonstrating my ability to integrate technical tools with creative problem-solving.
Additionally, in MetaController, published in UIST 2024 SIC, I developed a modular game-controller system combining digital fabrication, physical computing, and human-centered design. By refining existing approaches through iterative prototyping, we created a system that offers flexibility for game designers and players alike. Beyond its functional outcome, the iterative process of documenting insights, identifying challenges, and exploring alternatives contributed valuable knowledge to the field. This project deepened my understanding of modular and adaptive systems and reinforced the importance of iteration in creating robust, user-centered solutions.
Proposed Research and Intellectual Goals
My research goals center on leveraging my maker philosophy to develop adaptive systems and responsive environments that push the boundaries of design and technology. Specifically, I am interested in creating spaces that actively and iteratively adapt to people’s changing needs.
Historically, architecture has been constrained by its reliance on static materials designed primarily for durability and stability. Over time, architectural styles have evolved—from Roman to Gothic, Baroque, Neoclassical, and Modern—bringing greater flexibility in form and function. The advent of modern architecture, with its revolutionary use of materials and structural innovations, has enabled more variable spatial configurations. Yet even the most dynamic designs, such as those offering movable walls or modular layouts, remain fundamentally static: they cannot actively and intelligently respond to changing user needs. These designs lack the capability to learn from human behavior and to evolve in real time.
This limitation inspired me to imagine a new paradigm: a system capable of dynamically fulfilling people’s needs by sensing, learning from, and responding to human behavior. Such a system would involve three core components: Detection, Learning, and Responding.
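The three components above can be sketched as a minimal sense-learn-respond loop. All class and method names here are hypothetical illustrations of the concept, not an existing implementation:

```python
# Minimal sketch of the Detection -> Learning -> Responding loop.
# The AdaptiveSpace class and its behavior are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class AdaptiveSpace:
    """Toy model of a space that senses, learns from, and responds to occupants."""
    # Running counts of observed occupant activities (the learned state).
    activity_counts: dict = field(default_factory=dict)

    def detect(self, observation: str) -> str:
        # Detection: in practice this would come from sensors or computer vision.
        return observation

    def learn(self, activity: str) -> None:
        # Learning: accumulate evidence about occupant behavior over time.
        self.activity_counts[activity] = self.activity_counts.get(activity, 0) + 1

    def respond(self) -> str:
        # Responding: adapt the space toward the most frequently observed need.
        if not self.activity_counts:
            return "default configuration"
        dominant = max(self.activity_counts, key=self.activity_counts.get)
        return f"reconfigure for {dominant}"


space = AdaptiveSpace()
for obs in ["reading", "meeting", "reading"]:
    space.learn(space.detect(obs))
print(space.respond())  # prints: reconfigure for reading
```

In a real system, each method would be far richer (sensor fusion, a learned model, actuated hardware), but the closed loop — observe, update, act — is the core structure.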
This vision aligns with my belief that architecture should not merely accommodate but actively enhance human experience by evolving with its occupants. Each component is described in detail below:
Detection
My exploration into detection methods is informed by projects such as Enhancing Origami Learning. As mentioned earlier, this project involved evaluating multiple 3D reconstruction techniques like photogrammetry, LiDAR, and NeRF. Through this effort, I gained hands-on experience with computer vision and the integration of diverse sensing technologies, which are foundational to detecting human behavior in adaptive systems.
To further structure detection, I am exploring the integration of graphs with shape grammar as a way to formally represent spatial configurations and human-environment interactions. Graphs provide a dynamic structure where nodes represent spatial elements (such as objects, users, and furniture) and edges define relationships (such as proximity, movement, and interaction). Shape grammar serves as a rule-based framework that operates on these graph structures, allowing for the identification of recurring spatial and behavioral patterns. By combining graph-based detection with shape grammar transformations, I aim to develop a method that can recognize and adapt to spatial behavior dynamically.
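The graph-plus-grammar idea above can be illustrated with a small sketch: spatial elements as nodes, labeled relationships as edges, and a shape-grammar-style rule as a pattern-and-rewrite pair over the graph. The specific elements ("user", "desk") and the rule itself are invented for illustration:

```python
# Spatial configuration as a labeled graph: node -> {neighbor: relation}.
# All element names and the example rule are illustrative assumptions.
graph = {
    "user": {"desk": "near", "chair": "sitting_on"},
    "desk": {"lamp": "supports"},
    "chair": {},
    "lamp": {},
}

def matches(graph, node, relation, neighbor):
    """Detection: does `node` hold `relation` to `neighbor` in the graph?"""
    return graph.get(node, {}).get(neighbor) == relation

def apply_rule(graph):
    """Shape-grammar-style rewrite: if the user is seated near a desk,
    recognize the pattern and add a 'focus_zone' element to the desk."""
    if matches(graph, "user", "near", "desk") and matches(graph, "user", "sitting_on", "chair"):
        graph.setdefault("focus_zone", {})
        graph["desk"]["focus_zone"] = "contains"
    return graph

updated = apply_rule(graph)
print("focus_zone" in updated)  # prints: True
```

The left-hand side of a rule is a subgraph pattern to detect; the right-hand side is a transformation to apply, which is what lets the same formalism serve both recognition and adaptation.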
Learning
In the learning domain, my coursework in machine learning and generative AI has equipped me with both foundational and cutting-edge knowledge. I am particularly interested in exploring methods such as encoding behavioral data into latent spaces or vector quantization spaces and leveraging multimodal architectures to integrate data from various sources. Shape grammar offers an alternative approach to encoding spatial and behavioral patterns, allowing for a structured representation of adaptive responses. By integrating shape grammar with machine learning, I aim to explore hybrid models that combine rule-based spatial reasoning with data-driven learning approaches. These methods will enable me to experiment with interpreting human behavior and environmental data to inform adaptive responses.
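As a toy illustration of the vector-quantization idea above, continuous behavioral features (e.g. occupancy and motion levels) can be mapped to the nearest entry in a small codebook of prototypical patterns. The codebook values here are invented for illustration, not learned from real data:

```python
# Toy vector quantization of behavioral data: map a continuous feature
# vector to its nearest codebook prototype. Labels and values are
# illustrative assumptions, not learned from real observations.
import math

# Hypothetical codebook: each entry is a prototypical behavior pattern,
# represented as (occupancy, motion) features in [0, 1].
codebook = {
    "idle":    (0.1, 0.1),
    "working": (0.8, 0.2),
    "moving":  (0.5, 0.9),
}

def quantize(features):
    """Return the codebook label whose prototype is nearest (Euclidean)."""
    return min(codebook, key=lambda label: math.dist(features, codebook[label]))

print(quantize((0.75, 0.25)))  # prints: working
```

In a learned system the codebook itself would be trained (as in VQ-VAE-style models) rather than hand-specified, but the quantization step — snapping observations to discrete latent codes — is the same.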
Responding
My experience with projects such as MetaController, along with work in physical computing, soft robotics, and the Internet of Things (IoT), has honed my ability to prototype and iterate quickly. I have explored various materials, fabrication methods, tools, and electronic components to develop tangible, interactive systems. This hands-on expertise will support my work in creating adaptive materials and structures capable of responding to learned insights. For example, I aim to prototype systems that can physically transform based on real-time data, bridging the gap between digital models and physical outputs.
Broader Vision
Guided by the ethos shared by the UIST 2024 keynote speaker—“Make impossible things possible, and make possible things easier”—I strive to create tools that expand existing capabilities. This philosophy resonates deeply with my approach to adaptive systems. My skills in machine learning (e.g., RNNs, Transformers) and computer graphics position me to integrate data-driven models into physical systems.
As part of my PhD research, I aim to: