
Interactive Structure
Robotic Repositioning of Vertical Elements in Man-Machine Collaborative Assembly through Vision-Based Tactile Sensing

Bastian Wibranek1, Boris Belousov2, Alymbek Sadybakasov3, Jan Peters4, Oliver Tessmann5
1,5 Technische Universität Darmstadt, DDU (Digital Design Unit), Department of Architecture
2,3,4 Technische Universität Darmstadt, Department of Computer Science
1,5 {wibranek|tessmann}@dg.tu-darmstadt.de
2,4 {belousov|peters}@ias.tu-darmstadt.de
3 [email protected]

The research presented in this paper explores a novel tactile sensor technology for architectural assembly tasks. In order to enable robots to interact both with humans and building elements, several robot control strategies had to be implemented. Therefore, we developed a communication interface between the architectural design environment, a tactile sensor and robot controllers. In particular, by combining tactile feedback with real-time gripper and robot control algorithms, we demonstrate grasp adaptation, object shape and texture estimation, slip and contact detection, and force and torque estimation. We investigated the integration of robotic control strategies for human-robot interaction and developed an assembly task in which the robot had to place vertical elements underneath a deformed slab. Finally, the proposed tactile feedback controllers and learned skills are combined to demonstrate the applicability and utility of tactile sensing in collaborative human-robot architectural assembly tasks. Users were able to hand over building elements to the robot or guide the robot through the interaction with building elements. Ultimately, this research aims to offer the possibility for anyone to interact with built structures through robotic augmentation.

Keywords: Interactive Structure, Robotics, Tactile Sensing, Man-Machine Collaboration


1 INTRODUCTION
In architectural design, the use of robotics has enabled architects to conceptualize a stronger bridge between the digital and physical worlds. This has led to the development of design and fabrication processes that rely on sensory feedback and focus on editability and adaptability during their finalization. In assembly processes, there are several delicate points at which sensor input is beneficial. This includes knowledge about the part being grasped by the robot and measurements of the accuracy of its placement.

By augmenting robots with custom tools for design and fabrication processes, a new class of design concepts is emerging (Gramazio and Kohler, 2008; Menges, 2015). In our context, we consider those concepts as "Interactive Assemblies (IA)". They are characterized by real-world inputs through sensors and robotic assembly processes which rely on a building plan that can be edited by users (Wibranek et al., 2019).

The input for robotic assemblies is usually acquired through different types of sensors and translated into actions by a robotic controller. Common sensor types are vision-based depth sensors or force-torque sensors. Robot controllers combine hardware and software to program a robot. Robot controllers required in architecture have to handle interaction with material substances and contact with building elements. For assembly tasks, several controller strategies have to be connected.

Those tasks include picking up an element and placing it at a desired location. The sensor serves as an interface that can be programmed to trigger certain tasks. For instance, if an object is inserted into the gripper, the gripper will close and the robot will wait for further instructions. When a grasped object is pulled, the robot will follow the object in the direction of the pull. Thereby, users can provide input through physical interaction.

The existing integration of sensors focuses on a specific parameter within the materialization process, such as measurement of geometry or pressure at critical points in manufacturing to avoid fabrication errors. However, these sensors lead to fundamental limitations when it comes to input for assembly and design strategies.

This paper proposes the use of an emerging class of inexpensive vision-based tactile sensors to enhance human-guided robotic assembly tasks in architectural applications with a sense of touch. Tactile sensors expand the capabilities of classical force-torque sensors by allowing for estimation of object shape and material properties, localization of objects inside the gripper, and detection of slip and contact events. Leveraging the advantages of tactile sensing, humans and robots can interactively collaborate by repositioning building elements within a given structure.

Figure 1: Robot autonomously inserting a building element underneath a deformed slab.

We demonstrate the first use of the FingerVision (Yamaguchi and Atkeson, 2017) tactile sensor in architectural assembly, enabled by a novel combination of two types of information provided by the FingerVision sensor: surface deformation and optical flow estimation. Tactile sensors provide important contact-level information such as contact geometry, force, material properties, and contact events (Cutkosky, Howe and Provancher, 2008). Tactile feedback was used, e.g., in tactile exploration of object properties (Lepora, Martinez-Hernandez and Prescott, 2016), grasping (Bohg et al., 2014), and object and tool manipulation (Chebotar, Kroemer and Peters, 2014; Ramón, Medina and Perdereau, 2013).

Our main contributions are: real-time monitoring of a scene at a global level (position and orientation of elements) and a local level (applied force, object properties), streamed into the digital architectural model; robotic control strategies based on data from an active vision system and a tactile sensor; and an architectural experiment speculating on the possibility of relocating structural elements within a construction by a robot driven by tactile feedback.

Figure 2: The Sake EZGripper equipped with two FingerVision sensors (left) and the sensor's hardware parts (right).

In our research, we question the discrete state paradigm of architecture and aim for a continuous approach, which does not require positions of all elements to be predefined in advance. Instead, it allows buildings to be reconfigured according to changes such as new load distributions and varying programmatic needs.

2 METHOD
The FingerVision sensor is an inexpensive alternative to other force sensors such as capacitive, piezo-resistive and piezoelectric sensors. In contrast to those, the FingerVision sensor is cheap due to its use of a common, low-cost camera. The tactile sensor is based on the original design by A. Yamaguchi and C. G. Atkeson (Yamaguchi and Atkeson, 2017). A camera sensor is mounted behind a transparent layer which consists of an acrylic carrier and a soft transparent silicone bed with embedded black dots (Fig. 2). Recent developments in computer vision algorithms enable estimation of contact forces, slip of an object, and its orientation between the fingers, all based on raw video data. The force sensing is based on tracking of the black dots, while the slippage and orientation of grasped objects are tracked through proximity vision (Fig. 3).

Figure 3: The data modalities of the FingerVision sensor: blob detection with no object (a), force applied to an object, with red lines indicating the measured force directions (b), no object in slip-detection mode (c), and an inserted object recognized by the sensor, with blue pixels showing the object (d).
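As an illustration of how these two modalities could be processed, the following Python sketch estimates an average tangential force from the displacement of the tracked dots and a slip signal from dense optical flow. It is a minimal sketch under our own assumptions, not the FingerVision implementation; the nearest-neighbour matching and all thresholds are simplifications.

    import cv2
    import numpy as np

    # Blob detector for the black dots embedded in the silicone layer.
    detector = cv2.SimpleBlobDetector_create()

    def dot_centers(gray):
        """Detected dot centres as an (N, 2) float array."""
        return np.array([kp.pt for kp in detector.detect(gray)], dtype=np.float32)

    def average_force(rest, current):
        """Mean dot displacement as a proxy for the average tangential force.

        'rest' holds dot positions of an unloaded reference frame; dots are matched
        by nearest neighbour, a simplification of the robust tracking actually needed.
        """
        if len(rest) == 0 or len(current) == 0:
            return np.zeros(2)
        dists = np.linalg.norm(current[:, None, :] - rest[None, :, :], axis=2)
        displacement = current - rest[dists.argmin(axis=1)]
        return displacement.mean(axis=0)   # in pixels; a calibration maps this to force

    def slip_magnitude(prev_gray, gray):
        """Mean optical-flow magnitude as a proxy for slippage of the grasped object."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return float(np.linalg.norm(flow, axis=2).mean())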

In order to enable robots to interact both with humans and building elements, several robot control strategies had to be implemented. These can be divided into three subcategories. First, tactile skills that enable the robot to handle elements based on the FingerVision sensor. Second, collaboration strategies in which the robot has to understand human behaviour expressed through building elements, such as pushing or pulling an object and handing over an object. Third, an action-based strategy for assembling a small-scale architectural model based on force feedback.

The setup for our experiments consisted of two lightweight KUKA arms, each with 7 degrees of freedom. One robot was equipped with a RealSense camera to obtain visual information of the scene via ArUco markers. On the second robot, the FingerVision sensor was mounted. For path planning, the MoveIt! package was used, and for task execution, the ROBot COMmunication interface (robcom) was employed.
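For the global tracking side of this setup, marker poses can be estimated from the camera image with OpenCV's ArUco module, roughly as in the following sketch. The dictionary, marker size and intrinsics are illustrative assumptions, and the calls shown use the older ArUco API (newer OpenCV versions expose cv2.aruco.ArucoDetector instead).

    import cv2
    import numpy as np

    # Intrinsics and distortion would come from the RealSense calibration (values assumed).
    camera_matrix = np.array([[615.0, 0.0, 320.0],
                              [0.0, 615.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def element_poses(frame, marker_length=0.04):
        """Return {marker_id: (rvec, tvec)} for all ArUco markers detected in a frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        poses = {}
        if ids is not None:
            rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
                corners, marker_length, camera_matrix, dist_coeffs)
            for marker_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
                poses[int(marker_id)] = (rvec.squeeze(), tvec.squeeze())
        return poses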

2.1 ROBOTIC TACTILE SKILLS
For our first experiments, we developed a tactile-feedback robot controller (Fig. 4). It consists of a unit to interpret the sensor data, a path planning unit and a robot controller for execution. Different control strategies were implemented to enable robotic handling of building elements.

We implemented an inspection strategy that can locate a building element in a scene, identify its length, and grasp the element based on the gathered knowledge. To do so, we had to link the object area obtained by the FingerVision sensor with the robot's position while moving along the object.
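A possible shape of this inspection scan is sketched below. The helper functions move_to and object_area are hypothetical placeholders for the robot motion command and the FingerVision object-area reading; the area threshold is an assumed value.

    AREA_THRESHOLD = 0.05   # assumed fraction of the image covered by the object

    def inspect_element(move_to, object_area, scan_positions):
        """Scan along an element and estimate its extent from the sensed object area.

        move_to(x)     -- command the gripper to position x along the scan axis
        object_area()  -- current object area reported by the FingerVision sensor
        scan_positions -- iterable of positions along the element (metres)
        """
        samples = []
        for x in scan_positions:
            move_to(x)                          # slide the open gripper along the element
            samples.append((x, object_area()))  # pair robot position with sensed area
        in_contact = [x for x, area in samples if area > AREA_THRESHOLD]
        if not in_contact:
            return None                         # element never entered the field of view
        start, end = min(in_contact), max(in_contact)
        return {"start": start, "end": end, "length": end - start,
                "grasp_point": 0.5 * (start + end)}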

Additionally, we implemented a strategy that enables robotic in-hand rotation of an element. The experimental setup is as follows. We first grasp a building element and place the gripper such that the element is parallel to the ground. Then we slowly open the gripper and observe the slippage signal. We note the slippage value at the moment the gripper drops the element. This value is the slippage threshold for the given object.
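The calibration routine could look roughly like this; open_by, slippage and is_holding are hypothetical names for the gripper and sensor interfaces, and the step size is an assumed value.

    import time

    def calibrate_slip_threshold(open_by, slippage, is_holding,
                                 step=0.5, settle_time=0.2):
        """Slowly open the gripper and record the slip signal just before dropping.

        open_by(step) -- open the gripper by a small increment (assumed millimetres)
        slippage()    -- current slip signal from the FingerVision proximity vision
        is_holding()  -- True while the element is still held between the fingers
        """
        last_slip = 0.0
        while is_holding():
            last_slip = slippage()    # remember the most recent slip reading
            open_by(step)
            time.sleep(settle_time)   # let the element settle before re-reading
        return last_slip              # slip value observed just before the drop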

Figure 4: A schematic design of a general tactile-feedback robot controller. The path planning unit plans the trajectory and sends the desired pose of the end effector to the robot control unit. The robot control unit computes and executes the trajectory and sends the new position back to the path planning unit. The path planning unit then recomputes the trajectory based on the current tactile information provided by FingerVision.

2.2 FORCE-BASED HUMAN-ROBOT INTERACTION
For robotic augmentation of humans in on-site construction and design, new robotic skills have to be developed. We implemented a robot controller that can identify whether a part is actively inserted by a human co-worker into the gripper, and that waits for activation through contact with the FingerVision sensor. When all these conditions are met, the gripper closes.
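A minimal sketch of such a handover trigger, assuming hypothetical contact and slip readings, is given below; the actual controller additionally smooths these signals over time (see Section 3.2).

    def handover_grasp(contact_detected, slippage, close_gripper, slip_threshold=0.1):
        """Close the gripper only when a part is actively inserted by the co-worker.

        The closing condition combines two FingerVision modalities: the object has
        to touch the contact medium AND produce a slip signal, i.e. it is being
        pushed into the fingers. All names and the threshold are illustrative.
        """
        if contact_detected() and slippage() > slip_threshold:
            close_gripper()
            return True
        return False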

Real-time readings from the FingerVision sensor allow us to determine the directionality of forces. We programmed a controller that enables users to push or pull building elements carried by the robot to desired locations. This controller moves the robot end effector in the direction of the average force. Robot commands were computed on the fly, enabling real-time feedback between the human operator and the robot.
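The following sketch shows one plausible form of this following controller: the average force vector is mapped to a Cartesian end-effector velocity, optionally restricted to a single axis as in Figure 13. Gain, deadband and the interface functions are assumptions, not the values used in our experiments.

    import numpy as np

    GAIN = 0.02      # assumed m/s per unit of sensed force
    DEADBAND = 0.5   # forces below this magnitude are treated as noise

    def follow_force(average_force, send_velocity, axis_mask=(0.0, 1.0, 0.0)):
        """Move the end effector in the direction of the average contact force.

        average_force()  -- 3D force estimate derived from the FingerVision dots
        send_velocity(v) -- command a Cartesian end-effector velocity
        axis_mask        -- restrict motion, e.g. to the y-axis as in Fig. 13
        """
        f = np.asarray(average_force(), dtype=float)
        if np.linalg.norm(f) < DEADBAND:
            send_velocity(np.zeros(3))                      # no intentional push or pull
        else:
            send_velocity(GAIN * f * np.asarray(axis_mask))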

Figure 5: Schematic diagram of the possible interactions, which can be limited to certain axes or rotations. The visual tracking system introduces a level of redundancy that would guarantee successful localization in situations of later adjustments of elements.


Figure 6: Conceptual diagram illustrating different positions of vertical elements with their three degrees of freedom between two surfaces (a). Karamba simulation of displacement with one element positioned outside the load location (b) and underneath the load (c).

2.3 ASSEMBLY TASK
In order to test our approach to tactile sensing in an architectural setting, we conducted an experiment in which a robot places a vertical building element beneath a deformable horizontal surface (Fig. 6). The vertical elements can be placed in any orientation or position underneath the surfaces in the xy-plane. Simulating a scenario in which the exact surface behaviour is not known in advance, we illustrate the variability of conditions on a construction site. An ArUco marker vision setup was used to estimate absolute positions of elements. The robot was programmed to insert the vertical element between the horizontal surfaces. Collisions, object slip, in-finger object orientation, and contact forces were estimated from the tactile sensor readings. By setting a certain threshold or searching for the maximum applied force, the robot autonomously finds the position for the element.

The human co-worker can pull elements into desired positions while the robot keeps the grasp (Fig. 1). A digital model in Rhino/Grasshopper orchestrates the assembly and visualizes the as-built positions of the vertical elements (Fig. 2). Therefore, we developed a communication interface between the architectural design environment (Rhino/Grasshopper), the sensor and gripper controllers via the Robot Operating System (ROS), and the KUKA LBR iiwa robot via the SL simulation and real-time control software package (Schaal, 2006).
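Two placement strategies mentioned above, pushing until a force threshold is reached and searching for the position of maximum force, could be sketched as follows. All function names and the numeric threshold are illustrative assumptions.

    def insert_until_threshold(step_up, normal_force, force_threshold=2.0, max_steps=200):
        """Push the vertical element upward until the sensed contact force reaches
        a threshold, i.e. until the element starts carrying the deformed slab.

        step_up()      -- move the end effector one small increment in +z
        normal_force() -- normal contact force estimated from the tactile sensor
        """
        for _ in range(max_steps):
            if normal_force() >= force_threshold:
                return True              # element is loaded; placement finished
            step_up()
        return False                     # threshold never reached within max_steps

    def position_of_max_force(candidate_positions, place_and_measure):
        """Alternative strategy: probe candidate xy-positions and keep the one
        where place_and_measure(position) reports the highest contact force."""
        return max(candidate_positions, key=place_and_measure)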

3 RESULTS
To validate our approach of using FingerVision-based control strategies for robotic assembly tasks, we ran different experiments. Real-time feedback allowed a human to guide the robot to a new position for an element. No programming was needed, only manual guidance by the human. We were able to build a small-scale model in which a robot moves a vertical element into the place of highest load. Instead of having a prescribed position, the load position is found through the use of a tactile sensor.

Figure 7: Schematic representation of the proposed collaborative positioning approach.

Figure 8: The robot inspecting an element: before approaching it (a), while the maximum object area is detected (b), at both ends of the object (c, d), and when the object is lifted (e).

Figure 9: Object area in correlation with the position of the sensor.

Figure 10: The robot performing in-hand rotation based on the proximity vision controller.

Figure 11: The orientation of the object is measured through proximity vision.

3.1 OBJECT DETECTION AND HANDLING THROUGH TACTILE SKILLS
When a building element or its position is undefined, it is necessary to gather this knowledge from the scene. We implemented a robotic inspection strategy which scans along an object to determine its shape (Fig. 8). We were able to show that this sensor allows tight coupling between building elements and the robot. The FingerVision sensor generates sufficient data for locating an object within the gripper. The gripper location was linked with the object area to determine when the object leaves the FingerVision gripper (Fig. 9). Locations of inspected elements were fed back into the architectural digital model. The robot successfully grasped the object and lifted it.

Figure 10 shows how the robot rotates an element within the gripper. The rotation of the object was measured through proximity vision, which combines the area and position of the object within the gripper.
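One plausible way to obtain this in-hand orientation is to fit a rotated rectangle to the segmented object pixels, as in the sketch below; this is an illustrative reconstruction, not necessarily the exact FingerVision routine.

    import cv2

    def in_hand_orientation(object_mask):
        """Estimate orientation (degrees) and centre of the grasped object from a
        binary mask of the object pixels (the blue pixels in Fig. 3d)."""
        points = cv2.findNonZero(object_mask)
        if points is None:
            return None                          # nothing between the fingers
        (cx, cy), (w, h), angle = cv2.minAreaRect(points)
        if w < h:
            angle += 90.0                        # report the angle of the long axis
        return {"angle": angle, "center": (cx, cy), "area": float(w * h)}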

3.2 FORCE-BASED HUMAN-ROBOT INTERACTION
In order to test the tactile human-robot collaboration, we set up different interactive tasks. The experimental setup is shown in Figure 12. We applied different forces in order to change the speed of the robot movement. Following a stick in the y-direction by pulling (left) and pushing (right) is based on the average contact force vector (Fig. 13). The controller regularized the force signal so that it always tended toward the desired force value, i.e. the force signal at the previous time step (Fig. 14).
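This regularization behaves like the leaky integrator referred to in Figure 15: the reference force tracks the previous measurements, so only sustained pushing or pulling drives the robot. A minimal sketch, with an assumed smoothing factor:

    class LeakyForceReference:
        """Keep the desired force close to the force measured at the previous time
        step; 'leak' controls how quickly the reference follows the signal."""

        def __init__(self, leak=0.9):
            self.leak = leak
            self.reference = 0.0

        def update(self, measured_force):
            error = measured_force - self.reference   # deviation that drives the robot
            self.reference = (self.leak * self.reference
                              + (1.0 - self.leak) * measured_force)
            return error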

During the handover task, a human put the stick between the fingers (Fig. 15). Our robot controller closed the gripper once the object was between the fingers and touched the contact medium of the FingerVision sensor. In all handover attempts, the gripper successfully started closing once the stick touched the sensor and continued until no slippage was detected (Fig. 16). Additionally, the controller provided enough time for adjusting the orientation and positioning of the stick.


Figure 12: Human applying force by pulling an object and the robot following in the desired direction.

Figure 13: The deformation of the black dots is tracked by an algorithm and translated into force vectors. Based on the average of those vectors we calculated the average force. The robot moved according to the strength and direction of this force. No force detected (a), average force pointing to the left (b), and average force pointing to the right (c).

3.3 ASSEMBLY TASK
The combination of different control strategies is a necessary step for robots to be useful on construction sites. In order to investigate the integration of robotic control strategies into an architectural fabrication process, we developed an assembly task in which the robot had to place vertical elements underneath a deformed slab.

Our system integrated the robotic control strategy to locate an object and grasp it according to the task. Due to the scale of the physical model, we implemented the positioning in the software environment instead of through interactive human guidance. The robot was able to place the element at the position of highest force (Fig. 18). As can be seen in Figure 17, after inspection the part is rotated, positioned between the slabs, and pushed until a certain threshold is reached.

Figure 14: One example of force readings in relation to the position, left for pulling.

The customised digital model captured the placement of the elements. We were able to combine the positions derived from the robot movements with the ArUco markers and record the force applied at the various positions within the structure. During all insertion steps, one could observe the scene in Grasshopper in real time. If the position of an inserted stick did not meet the desired position, it could be moved to a desired position inside the box model by executing one of the object-following controllers introduced earlier.

4 DISCUSSION
Our research investigated how man and machine can interact through building elements to reposition them within a structure. We implemented a system that consists of moveable vertical elements, a robot controller using tactile sensor data, man-machine collaboration, and a digital model which orchestrates the system. We developed several robotic control strategies based on the FingerVision sensor and merged them into one assembly task.

Although our experiments were conducted at a smaller scale, they illustrate how different strategies can be coupled into one assembly task. The controllers were tested in rather simple experiments that focused on building up an understanding of the technology of the FingerVision sensor. Nevertheless, the experiments highlight how a sensor that offers several modalities can become an interface between human, machine and building elements.

Previous research on sensor feedback in man-machine collaboration for fabrication used, for example, human gestures captured on-site to construct architectural elements (Helm, 2014) and robotic motion control generated from gestures and sensor data (Bard et al., 2014). Our developed control strategies enabled close collaboration between the human and the robot through the interaction with building elements. Users were able to hand over building elements to the robot or guide the robot through the interaction with building elements.


Figure 15: Handover task of an object based on the two modalities that are combined in our developed leaky integrator.

Figure 16: Grasping a stick with the robot controller in the handover task. The gripper closing signal was activated only if the object was slipping and touching the contact medium of the sensor at the same time.

Figure 17: The sequence of insertion of a vertical element underneath a deformed slab.

Figure 18: The deformed horizontal surface without any support (a) and a vertical element inserted below the point with the highest load (b).

While our sensor integration allows for human-robot interaction, it also offers the possibility to autonomously cope with difficulties. The deformed slab in the assembly experiment is not treated as an error case, but as a difficulty illustrating the robot controllers' capabilities. An earlier proposed robotic feedback process was used to assemble wooden rod structures based on force-torque sensors (Stumm et al., 2017). Although the sensor was capable of identifying problems in the assembly task, it was not able to address them. Instead, the robot would stop the execution and wait for a human co-worker to solve the issue. The robot controllers developed for our experiments are specifically designed to address similar problems.

Limitations of our research were the scale of the robots and of the assembly model. The experiments were conducted under lab-like conditions and neglect the fact that real construction sites are more cluttered and dirtier. The usage of the FingerVision sensor under such conditions still has to be tested. Furthermore, the robot control strategies imply an autonomous robot, whereas in our experiments a human chose when to start which action.

Forthcoming publications will focus on the scalability of our approach and investigate more complex robotic assembly strategies. One focus will be put on an autonomous choice of action as well as on articulating the building elements according to our proposal of editability.

5 CONCLUSION
The presented series of experiments demonstrates the potential of using robot controllers with a tactile sensor to enable force-based positioning of building elements.

The assembly experiment shows possible implementations of the technology and the framework in construction. Moreover, considering the set of modalities offered by the sensor, one can start to imagine a new type of interface for human-machine interaction, as the robotic controllers do not mimic simple switches that trigger actions. Instead, they offer the possibility to read the intentions of users and act accordingly.

Robot-augmented editability of built structures is something yet to be achieved in architecture. We proposed a novel sensor technology to offer the possibility for anyone to interact with built structures through robotic augmentation.


REFERENCES
Bard, J, Gannon, M, Jacobson-Weaver, Z, Contreras, M, Jeffers, M and Smith, B 2014, 'Seeing Is Doing: Synthetic Tools for Robotically Augmented Fabrication in High-skill Domains', ACADIA 14: Design Agency, Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)
Bohg, J, Morales, A, Asfour, T and Kragic, D 2014, 'Data-driven grasp synthesis - a survey', IEEE Transactions on Robotics, 30, pp. 289-309
Chebotar, Y, Kroemer, O and Peters, J 2014, 'Learning robot tactile sensing for object manipulation', IEEE International Conference on Intelligent Robots and Systems
Cutkosky, MR, Howe, RD and Provancher, WR 2008, 'Force and Tactile Sensors', in Siciliano, B and Khatib, O (eds), Springer Handbook of Robotics, Springer, Berlin, Heidelberg
Gramazio, F and Kohler, M 2008, 'Authoring Robotic', Architectural Design, 3(84), pp. 14-22
Helm, V 2014, In-situ-Fabrikation - Neue Potenziale roboterbasierter Bauprozesse auf der Baustelle, Ph.D. Thesis, Kunsthochschule für Medien Köln
Lepora, N, Martinez-Hernandez, U and Prescott, T 2016, 'Active Bayesian Perception for Simultaneous Object Localization and Identification', Robotics: Science and Systems
Menges, A 2015, 'The new cyber-physical making in architecture: Computational construction', Architectural Design, 85(5), pp. 28-33
Ramón, JAC, Medina, FT and Perdereau, V 2013, 'Finger readjustment algorithm for object manipulation based on tactile information', International Journal of Advanced Robotic Systems, 10, p. 9

Schaal, S 2006, The SL Simulation and Real-Time Control Software Package, Technical Report, University of Southern California

Stumm, S, Braumann, J, von Hilchen, M and Brell-Cokcan, S 2017, 'On-site robotic construction assistance for assembly using a-priori knowledge and human-robot collaboration', Advances in Intelligent Systems and Computing
Wibranek, B, Belousov, B, Sadybakasov, A and Tessmann, O 2019, 'Man-Machine Collaboration through Building Components for As-Built Digital Models', CAAD Futures proceedings
Yamaguchi, A and Atkeson, CG 2017, 'Implementing tactile behaviors using FingerVision', IEEE-RAS International Conference on Humanoid Robots
