In this tutorial we’ll look at the different ways that we can create Targets in the HAL Robotics Framework for Grasshopper.
Targets are how we define where a Robot should move. They do not, in and of themselves, define how a Robot should move, so we can mix and match Target creation techniques with different types of Move.
There are two main ways of creating Targets, both of which are available from the Target component in the HAL Robotics tab, Motion panel.
The first way of creating a Target is a Cartesian Target from a Frame. When a Robot moves to a Cartesian Target, the active TCP of the Robot aligns with the Target's position and orientation. We can create a Frame in Grasshopper, for example by selecting a point and making it the origin of an XY Plane. When we assign this Frame to our Target component and hide the previous components, we can see our Target axes centred on the point we selected. Of course, because this is Grasshopper, if we move that point our Target moves with it. We can also set the Reference here; please take a look at the References tutorial to see how those work.
The Z axis of this Target is pointing up. In our Tool creation tutorial, we recommended that the Z axis of a Tool's TCP point out of the Tool, following the co-ordinate system flow of the Robot itself. That means that when our Robot reaches the Target we've just created, the Tool will also be pointing up. That may be desirable, but remember that setting the orientation of your Targets is just as important as their positions, and therefore creating the correct Frame is critical. We have found that in a number of cases creating Targets facing directly downwards, with their X axes towards world -X, is a useful default, and have added a shortcut to create those by passing points directly to a Target parameter.
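To illustrate that downward-facing default, here is a minimal sketch (plain Python vector maths, not the HAL API) of the Frame it describes: Z pointing straight down, X towards world -X, and Y completed so the frame stays right-handed:

```python
# Hypothetical sketch, not HAL's implementation: build a "downward-facing"
# frame at a point, with Z towards world -Z and X towards world -X.
def downward_frame(origin):
    x_axis = (-1.0, 0.0, 0.0)   # X towards world -X
    z_axis = (0.0, 0.0, -1.0)   # Z pointing straight down
    # Y = Z cross X keeps the frame right-handed
    y_axis = (
        z_axis[1] * x_axis[2] - z_axis[2] * x_axis[1],
        z_axis[2] * x_axis[0] - z_axis[0] * x_axis[2],
        z_axis[0] * x_axis[1] - z_axis[1] * x_axis[0],
    )
    return {"origin": origin, "x": x_axis, "y": y_axis, "z": z_axis}

frame = downward_frame((0.25, 0.10, 0.05))   # origin chosen arbitrarily
```

Passing a point to a Target parameter in Grasshopper effectively applies this kind of default orientation for you.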
The other primary way of creating a Target is in Joint space, that is, by defining the desired position of each active Joint of your Robot. We can do this by changing the template of our Target component: right-click and select From Joint Positions. The inputs are now slightly different. We need to pass a Mechanism into the component to visualize the Robot in its final position and ensure that the Positions we give are valid. The other required input is a list of the Positions of each active Robot Joint. It's important to note that, unlike many other inputs in the HAL Robotics Framework, these Positions must be defined in SI units (metres and radians) because they can legitimately contain both lengths and angles. If we create a few sliders, six for a six-axis Robot, merge them into a single list and ensure that we're in SI units, we can visualize the final position of our Robot at these Joint positions.
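As a small sketch of that unit conversion (assuming, hypothetically, that your sliders are in degrees for a six-axis Robot), the values must be converted to radians before being used as Joint Positions:

```python
import math

# Sketch: Joint-space Target Positions expect SI units (metres, radians),
# so slider values in degrees must be converted before use.
slider_degrees = [0.0, -45.0, 90.0, 0.0, 45.0, 0.0]   # example values only
positions_si = [math.radians(d) for d in slider_degrees]
```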
Using these two Target creation methods we can get our Robots to perform any motion we require. That being said, particularly in a Grasshopper-centric workflow, we often want to follow a Curve as we did in the Getting Started tutorial. To facilitate that, we have included a From Curve template in the Target component. This variation of the component takes a Curve and creates Targets that approximate it by subdividing (or discretizing) the Curve. The Discretization input controls whether the approximation uses straight line segments only, or line segments and arcs. The accuracy of the approximation is controlled by the Tolerance input: the distance given here is the maximum allowed deviation from the input Curve.
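To give a feel for the discretization idea (an illustration only, not HAL's actual algorithm), here is a sketch that approximates a parametric curve with line segments, splitting recursively until the sampled midpoint deviation falls within a tolerance:

```python
import math

def point(t):
    # Example curve: a quarter circle of radius 1, parameterised on [0, 1].
    a = t * math.pi / 2.0
    return (math.cos(a), math.sin(a))

def subdivide(t0, t1, tol, out):
    """Append curve points to `out` until each chord deviates < tol."""
    p0, p1 = point(t0), point(t1)
    mid = point((t0 + t1) / 2.0)
    chord_mid = ((p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0)
    deviation = math.hypot(mid[0] - chord_mid[0], mid[1] - chord_mid[1])
    if deviation <= tol:
        out.append(p1)          # segment is close enough to the curve
    else:
        subdivide(t0, (t0 + t1) / 2.0, tol, out)
        subdivide((t0 + t1) / 2.0, t1, tol, out)

targets = [point(0.0)]
subdivide(0.0, 1.0, 0.001, targets)   # 1 mm tolerance if units are metres
```

Tightening the tolerance produces more Targets, exactly as reducing the Tolerance input does on the From Curve template.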
In this tutorial we’ll look at the different utilities to modify Targets built in to the HAL Robotics Framework for Grasshopper.
Targets are how we define where a Robot should move, and defining them correctly is a fundamental step in the programming of a Procedure. To facilitate certain recurring cases, we provide inbuilt Target modifiers.
The Transform Target component from the HAL Robotics tab, Motion panel offers several different ways of realigning your Targets: to face vectors, curve tangents, mesh or surface normals, or any other direction you choose with a Free Transformation. To start, let's stick with the Parallel to Vector template and try to get all of our Targets to face the base of our Robot, which happens to be at the world origin. This is often useful if your Tool is free to rotate around its Z axis and can help to avoid reachability issues. We can use the Target Properties component to get the location of our Targets, create an XY plane to represent the Robot base frame and use a Vector from 2 Points component to find the vector between our Targets and the origin. We can use our new vector as the Direction input in our Target modifier and pass our Targets to its input. Ensure the original Targets are hidden to make it easier to see the results. We should see that our Targets are all pointing towards the origin, but not necessarily in the way we were expecting. That is because the Axis defaults to Z. If we change this to X, we should see something closer to what we want. We can also Flip the vectors so that our Targets face the opposite direction. If we don't want our Targets to all face down towards the origin as they do now, we can discard the Z component of our input vector to keep our Targets horizontal.
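The vector maths involved is simple enough to sketch in a few lines (plain Python, not the HAL API): compute the Target-to-origin direction and optionally zero its Z component to keep the Targets horizontal:

```python
import math

def direction_to_origin(target_position, keep_horizontal=True):
    """Unit vector from a Target's position towards the world origin."""
    x, y, z = target_position
    d = (-x, -y, 0.0 if keep_horizontal else -z)   # Target -> origin
    length = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return (d[0] / length, d[1] / length, d[2] / length)

direction = direction_to_origin((0.3, 0.4, 0.5))
```

This `direction` plays the role of the Direction input on the Parallel to Vector template.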
Most variations of the Transform Target component work in a similar way so please play with those to discover what they can do. The one exception is the Free Transform template. This allows us to apply any transformation we want to our Targets by simply specifying translations and reorientations. The default Reference for this transformation is the Target itself but we can specify a Plane as the Reference to change the way our Targets are transformed.
The last Target modifier we're going to look at in this tutorial is the Target Filter component from the HAL Robotics tab, Motion panel. To demonstrate this functionality, we can divide a curve into a large number of Targets. Beyond a certain point, adding more Targets is unnecessary; it slows down code export and, in some circumstances, code execution. The Target Filter component takes our Targets and splits them into two lists: those that meet the Position and Orientation tolerances (Remaining) and those that don't (Discarded). It is therefore useful to hide the component output and display only the Remaining Targets. If we filter the Targets to the nearest centimetre, we should see that far fewer remain, and by changing the Position tolerance the number of Remaining Targets will vary accordingly.
Motion Settings control the way in which a Robot moves between Targets. They combine settings for the Space, Speeds, Accelerations, Blends and a number of other parameters to control how a Robot gets to its destination.
The Motion Settings component can be found in the HAL Robotics tab, Motion panel and can be passed directly into the Move component. The four settings mentioned previously are the first inputs on this component.
Space controls which path the Robot takes to a Target. In Cartesian mode, the TCP moves in a very controlled manner along a straight line or arc. This is probably the easiest motion type to visualize, but it can cause problems when moving between configurations or when trying to optimise cycle times. Moving in Joint space means that each Joint moves from one position to the next without consideration for the position of the TCP. Joint space Moves always end in the same configuration and are not liable to Singularities. It's often useful to start your Procedures with a Motion in Joint space to ensure your Robot is always initialized to a known position and configuration. It's worth noting that when using Joint space Motions your Toolpath will be dotted until the Procedure is Solved, because we can't know ahead of time exactly where the TCP will go during that Motion. Once Solved, you will see the path your TCP will actually take in space.
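That difference is easy to see with a toy model. The sketch below uses a planar two-link arm (the link lengths are assumptions, not a real Robot model): a Joint space Move interpolates the Joint positions linearly, so the TCP traces a curve rather than the straight line a Cartesian Move would follow:

```python
import math

L1, L2 = 0.5, 0.4   # assumed link lengths in metres

def tcp(j1, j2):
    """Forward kinematics: TCP position for two Joint angles (radians)."""
    x = L1 * math.cos(j1) + L2 * math.cos(j1 + j2)
    y = L1 * math.sin(j1) + L2 * math.sin(j1 + j2)
    return (x, y)

start = (0.0, 0.0)                      # Joint positions at the start
end = (math.pi / 2.0, math.pi / 4.0)    # Joint positions at the Target
# Linear interpolation in Joint space, sampled at 11 steps.
path = [
    tcp(start[0] + t * (end[0] - start[0]),
        start[1] + t * (end[1] - start[1]))
    for t in (i / 10.0 for i in range(11))
]
# path[0] and path[-1] are fixed, but the intermediate TCP positions
# do not lie on the straight line between them.
```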
Speed settings, as the name implies, constrain the speed of your Robot. They can be declared in Cartesian space to directly limit the position or orientation Speed of the TCP. You can also constrain the Speeds of your Robot’s Joints using the second overload or combine the two using the third overload. Please note that not all Robot manufacturers support programmable Joint speed constraints so there may be variations between your simulation and real Robot when they are used.
Blends, sometimes called zones or approximations, change how close the Robot needs to get to a Target before moving on to the next. It's worth considering your Blends carefully, because increasing their size can drastically improve cycle time by allowing the Robot to maintain speed instead of coming to a stop at each Target. Blends are most easily visualized in Position. If we set a 100 mm radius Blend, we can see circles appear around each Target. These indicate that the Robot will exactly follow our Toolpath until it gets within 100 mm of the Target, at which point it will start to deviate within that circle to keep its speed up and head towards the subsequent Target. It will follow our Toolpath exactly again once it leaves the circle. When we Solve our Procedure, we can see the path our TCP will actually take: getting close, but not actually to, each of our Targets.
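The geometric rule described above can be sketched in a couple of lines (the blending itself is solver-dependent; this only shows where deviation is permitted):

```python
import math

def inside_blend(tcp_position, target_position, blend_radius):
    """True once the TCP is inside the Blend circle and may deviate."""
    return math.dist(tcp_position, target_position) < blend_radius

target = (0.5, 0.0, 0.0)                             # positions in metres
print(inside_blend((0.35, 0.0, 0.0), target, 0.1))   # 150 mm away -> False
print(inside_blend((0.45, 0.0, 0.0), target, 0.1))   # 50 mm away  -> True
```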
In this tutorial we’ll see how to combine different Procedures to chain sequences using the HAL Robotics Framework for Grasshopper.
Procedures are sequences of atomic Actions, such as Move, Wait or Change Signal State. Each of these is created individually, but they need to be combined to be executed one after the other by our Robot. Procedures can also be created as reusable sequences of Actions, for example moving to a home position and opening a gripper.
To combine multiple Procedures, we can use the Combine Actions component from the HAL Robotics tab, Procedure panel. This component allows us to name our new Procedure with the Alias input, which is extremely useful for identifying the Procedure later, particularly when using it more than once. The only mandatory input for this component is the list of Procedures and Actions to be Combined. In Grasshopper we can pass any number of wires into the same input, and the Combine Actions component will create a Procedure for each branch of items it receives. However, to keep a clean document and an easy means of changing the order of Procedures, it is recommended to use something like a Merge component and flatten all the inputs. Once those are Combined, we will have a single Procedure that executes each of our sub-Procedures one after the other.
Once a Procedure has been assigned to a Controller and Solved, it is useful to see how a Simulation is progressing through that Procedure, so we can see where any issues may lie or which phases might be taking longer than we expect. We can do that using the Procedure Browser. To access the Procedure Browser, we need to ensure that we have an Execution Control connected to a complete Execute component. Once that's in place, we can double-click on the Execution Control to open the Procedure Browser. In this window we can see our execution controls (reset, play/pause, next, previous and loop) as well as all of our Actions. Alongside that we have a time slider that allows you to speed up or slow down the Simulation of your Procedures without affecting the program itself. The rest of the Procedure Browser window shows the Procedure that you are executing and the progress of each Action within it. This Procedure Browser view also serves to demonstrate the purpose of the Compact input on our Combine Actions component. By default, Compact is set to true. This compacts all of the incoming Procedures and creates a single, flat list of Actions. If, however, we toggle Compact to false, we see that all of our previous Procedures are maintained in the hierarchy and can be collapsed or expanded to view their contents. The hierarchical, un-compacted mode can be particularly useful if you reuse sub-Procedures.
In this tutorial we’ll see how to synchronize the motion of multiple Mechanisms using the HAL Robotics Framework for Grasshopper.
When we have multiple Robots or Mechanisms, such as Positioners, in a Cell it may be necessary for them to execute Motion synchronously. This could be in scenarios such as two Robots sharing a load between them or a Positioner reorientating a Part whilst a Robot works on it.
In order to Synchronize Motions, we need to ensure we have multiple Procedures to work with; we always use one Procedure per Mechanism we want to program, whether it's a Robot, Track or Positioner. A setup for this could be as simple as having two Robots each moving to a single Target in Joint space. To make this a little more demonstrative, it is preferable if the Motions are dissimilar, for example one long and the other short, or one fast and the other slow. To Synchronize the Motions, we need to assign them Sync Settings. The Sync Settings component can be found in the HAL Robotics tab, Motion panel. We should assign a unique name, using the Alias input, to the Sync Settings to ensure that they are easily identifiable later. Once those Sync Settings have been created, they need to be assigned to both of our Moves. It is important to note that it must be the exact same Sync Settings passed to both. Your Sync Settings must only be used for one synchronous sequence of Motions per Procedure, and synchronous sequences must contain the same number of Actions in each Procedure in which they're used. We can now Solve and see that the duration of our Moves has been adjusted so that they both take the same amount of time. Also critically important in Synchronization is the fact that all the Motions start at the same time. We can test this out by adding a Move to one of the Procedures prior to the Synchronous Moves. When this is re-Solved, we can see that the second Robot implicitly waits for the first Robot's Move to finish before they both start their Synchronous Moves.
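The effect of Sync Settings on durations can be sketched very simply (this illustrates the outcome, not the HAL solver's internals): each synchronous Move is stretched to the duration of the longest one:

```python
# Nominal Move durations in seconds for two Mechanisms (example values).
moves = {"robot_a": 2.0, "robot_b": 5.5}

# After synchronization, every Move in the synchronous group takes as
# long as the slowest member, so they start and finish together.
synced = dict.fromkeys(moves, max(moves.values()))
```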
In this tutorial we’ll see how to simplify the programming of multi-Mechanism setups using Target Resolvers in the HAL Robotics Framework for Grasshopper.
When we have multiple Robots or Mechanisms, such as Positioners, in a Cell, programming can become increasingly complex. In many scenarios we only really have one set of Targets that we care about, which a Positioner should be relocating to facilitate access by a Manipulator. We refer to this configuration, where one Mechanism displaces the Targets of another, as Coupled motion.
As an example of this, we can have a Robot drawing a straight line between two Targets that are referenced on a rotary Positioner. In order to set up Coupled motion, we need to add some settings to all the Motion Settings that will be used in that Coupled configuration. We can create Kinematic Settings from the HAL Robotics tab, Motion panel and, because we're dealing with multiple Mechanisms, we are going to change to the Composite Kinematic Settings template. We are now asked to input settings for the Primary and Secondary Mechanisms. The former is typically the Mechanism that is moving the Targets around, historically termed "Master", and the latter is the Mechanism moving to those Targets, historically termed "Slave". In our case the Targets are referenced to the Positioner and the Robot is following those Targets around. We don't need to assign any additional Kinematic Settings to our Mechanisms, so we can simply chain our Mechanisms into simple Kinematic Settings and into their positions in the Composite Kinematic Settings. These Composite Kinematic Settings can now be added to our Motion Settings for the Coupled Motions. We have two separate sets of Motion Settings here: one for the Coupled Motion and the other for an asynchronous initialization Motion for each of the Mechanisms.
With our settings in place, we can now look at programming the Positioner. We could calculate the Targets for the Positioner and set them explicitly, but in a scenario like this welding example we can instead set some rules for the Positioner to follow. We do this using the Target Resolvers from the HAL Robotics tab, Motion panel. There are a few different templates to explore but, in our case, the first is the one we want. The Vector Aligned Target Resolver tells the Positioner to point the given Axis of our Targets towards a particular direction. Where possible, it's normally preferable to weld with gravity, so we're going to ask the Positioner to point the Z axis of our Targets straight down. The Target Resolver can be used in a Move just like a Target, provided it is duplicated to match the number of "secondary" Targets. To make that task easier, we have included a template in Move called Synchronized, which takes in a Procedure and a Target, or Target Resolver, and will create all of the necessary Moves for you with the correct synchronization settings to match the input Procedure. A Synchronized Move creates a Procedure as an output, like any other Move, so it can be merged and Combined as we would normally do with any other Move. With both of our Procedures now complete, we can Solve and Simulate to see our Positioner aligning itself automatically to best present the Targets to the Robot.
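To build intuition for what the resolver is solving, here is a coarse numeric sketch (not HAL's actual method): for a rotary Positioner assumed to rotate about world X, scan for the angle that best points a Target's Z axis straight down:

```python
import math

def rotate_about_x(v, angle):
    """Rotate a 3D vector about the world X axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x, y * c - z * s, y * s + z * c)

def best_positioner_angle(target_z_axis, steps=3600):
    """Brute-force scan: angle maximising alignment with world -Z."""
    def alignment(angle):
        # dot product of the rotated Z axis with "down" (0, 0, -1)
        return -rotate_about_x(target_z_axis, angle)[2]
    angles = [2.0 * math.pi * i / steps for i in range(steps)]
    return max(angles, key=alignment)

# A Target whose Z initially points straight up needs a half-turn.
angle = best_positioner_angle((0.0, 0.0, 1.0))
```

The real resolver does this per Target, continuously, and within the Positioner's Joint limits; the sketch only shows the alignment objective.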
There are scenarios in which a single Robot may have access to multiple Tools and the ability to change which Tool is in use at runtime. This could be either because the Tool itself has multiple Endpoints, or because automatic Tool changing equipment is available in the Cell.
In preparation for this tutorial, a number of things have been put in place. Firstly, three Tools have been created: a simple cone, an interchange plate and a double cone with two distinct Endpoints. Secondly, these Tools have each been positioned in front of the Robot in a known location. And finally, a Toolpath has been created to go to each of the Tool picking positions with a jump in between. I have also prepared the standard Combine, Solve and Execute flow.
The focus of this tutorial will be on the Change Tool component, which can be found in the HAL Robotics tab under Procedure. Each version of this component will give us a Procedure that we can Combine with Moves and Waits as we have done countless times before. The Change Tool component has three templates and we'll cover them all, starting with Detach Tool. We're going to use this to remove the Cone Tool that we've initially got attached to the Robot. We want to ensure the Mechanism is the combined Mechanism, and we can specify the Tool as the Cone Tool. The Tool input is actually optional; the currently active Tool will be removed if none is specified. We can weave this into our main Procedure and, if you Solve and Execute, you'll see the Cone disappear when we hit the right Target. The Cone is actually in the exact position we left it, but it is no longer displayed because it's not part of our Robot. We can use the Environment input on Execute to force the display of mobile, non-Mechanism Parts. If we now Execute, we should see the Cone hang in space where we detached it.
From here we're going to attach two Tools to the Robot in succession. The first is the Interface, which acts as something of a Tool Changer. Using the Change Tool component and the Attach Tool template, we can set the combined Mechanism as the Mechanism again and the Interface as the Tool. Merging this into our Procedure will attach the Interface to our Robot, and we can visualize the Parts before they are attached using the same technique as for the Cone, passing the Parts into Environment on Execute. If we repeat this process and attach the MultiTool this time, you should see that the MultiTool gets connected to the Active Endpoint of the Robot, which in this case is the end of the Interface. This behavior may not always be desirable, e.g. for stationary tools, and can be modified in the overload of Attach Tool.
In this final combination of Tools attached to the Robot, we have two distinct potential Endpoints. The final template of the Change Tool component allows us to set which Endpoint, or Tool if you have multiple distinct Tools attached, is currently Active. We do this by specifying, once again, the combined Mechanism, and the Connection that we want to use as the Active Endpoint. To ensure consistent and deterministic output, I would recommend doing this immediately after attaching the MultiTool, as well as whenever you wish to switch between the two Endpoints. With that merged and our Tool Parts in the Environment, we can see everything run.
In this tutorial we’ll see how the previous tutorials on synchronization and Target Resolvers can be used together to program a Track, or Linear Positioner, using the HAL Robotics Framework for Grasshopper.
Mounting a Robot on a Track, or linear axis Positioner, can massively open up the usable space in a Cell. However, programming one Mechanism whilst it’s mounted on another can introduce a few complexities.
As per usual, we're going to start this session by modelling our Cell. This means picking our Robot, Attaching a Tool, and importing our Positioner. This is where things start deviating slightly from our synchronization tutorial: in this instance we actually want to mount one of our Mechanisms on another. The HAL Robotics Framework doesn't really make a distinction between Mechanism types, e.g. Positioner, Robot or Tool, so we can use the exact same strategy as Attaching our Tool to the Robot. We'll use the Attach component with the Track as the Parent and the Robot + Tool combination as the Child. Ensure that IsEndEffector is left as true, because our Child contains our desired End Effector. We can use the Location and InWorld parameters to adjust the position and orientation of the Robot on the Track. This will create a single Mechanism that we can program as we would any other Mechanism; however, this monolithic approach doesn't give us as much freedom as treating this like a multi-Mechanism setup. N.B. If you do use the single Mechanism approach, ensure any Joint space Targets are a) in SI units for the relevant Joints, and b) in the right order, i.e. with the Track first in this case (Parent Joints followed by Child Joints). To return to a multi-Mechanism scenario we can use the Disassemble component from HAL Robotics -> Cell. This will split our Mechanism into its constituent parts, including its SubMechanisms, that is to say, the Mechanisms which make it up. We can now treat the SubMechanisms as we did our Mechanisms in previous tutorials.
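As a sketch of that ordering rule for the single-Mechanism approach (example values only), a Joint space Target for the combined Track + Robot lists the Parent's Joints first, all in SI units:

```python
import math

# Parent (Track) Joints come first, then the Child (Robot) Joints.
track_position_m = [1.2]                                  # carriage, metres
robot_positions_deg = [0.0, -30.0, 60.0, 0.0, 30.0, 0.0]  # example angles

# Everything must be in SI units: metres for the Track, radians for the Robot.
positions = track_position_m + [math.radians(d) for d in robot_positions_deg]
```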
There are a few subtleties to programming a Track, so let's walk through an example. Let's start by preparing a simple curve-following Procedure for the Robot, as we did in the Getting Started tutorial. Ensure the Track is actually required by making this curve longer than the Robot's reach. We can then program the Track using Targets, as we do for any other Mechanism, for maximum control, or using the Target Resolvers seen in a previous tutorial for a quick but effective approach. For a Track, the Offset Target Resolver overload is of particular use. The default version of this component asks simply for an Offset distance, which is the distance the Track's Endpoint (and by extension the Robot's base) should be kept from the Robot's Target. Setting the Offset to 0, or any value less than the distance between the Track's Endpoint and Target, will cause the Track to get as close to the Target as possible. To create a full Procedure for the Track, we need to set some Sync Settings for the Robot's Move and can then use the Synchronize utility overload of the Move component to synchronize our Target Resolver with the full Robot Procedure (see the Synchronize Motion tutorial for a refresher on how to do this). As one Mechanism is moving another, you will also need to ensure that the Kinematic Settings are in place for this setup, with the Track as the Primary and the Robot as a Secondary, in both the Robot's Move and the Track's (see the Coupled Motion and Resolving Targets tutorial as a reminder if needed). With this in place, we are in a position to Solve and Execute, and we should see both Mechanisms moving as we expect.
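The Offset behaviour reduces to a simple one-dimensional rule, sketched below (an illustration, not HAL's solver; the travel limits are assumptions): slide the Track so its Endpoint trails the Target by the Offset, clamped to the Track's travel:

```python
def track_position(target_x, offset, travel=(0.0, 4.0)):
    """1-D sketch: Track position that trails the Target by `offset` metres,
    clamped to the Track's assumed travel limits."""
    desired = target_x - offset          # stay behind the Target by `offset`
    low, high = travel
    return min(max(desired, low), high)  # clamp to the Track's limits

print(track_position(3.0, 1.0))   # keeps the 1 m offset
print(track_position(0.5, 1.0))   # clamped at the start of travel
```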
Although exporting is covered in a later tutorial, there are a couple of things that need to be set up for external axes that are worth looking at here if your Positioner is an external axis and programmed within the same exported Procedure as your Robot. For example, the Track's Joint will typically be numbered 6 (0-based, so 7th when exported) or higher, depending on the exact configuration of our real Cell.