Classical Robotics

Know your enemy […] - Sun Tzu

In this section, we build a foundation in classical robotics that you'll later use to understand why learning-based methods are so powerful.

We start with how motion is generated, then look at the common types of robot motion, and finally work through a concrete kinematics example before discussing limitations.

Key Takeaway

Learning-based approaches to robotics are motivated by fundamental needs that traditional dynamics-based techniques have historically overlooked.

Modern robotics requires methods that can generalize across different tasks and robot embodiments, allowing one approach to work effectively across many different problems rather than requiring custom solutions for each scenario. There’s also a growing need to reduce dependency on human expertise, enabling systems to learn appropriate behaviors from data rather than requiring experts to hand-craft rules and models for every situation. Finally, the field needs approaches that can leverage the rapidly growing availability of robotics datasets, taking advantage of the collective knowledge captured in these large-scale data collections.

Explicit and Implicit Models

We begin with the high-level picture: how do different approaches generate motion?

A diagram showing different approaches to robot motion generation, organized into two main categories: explicit dynamics-based methods on the left (including classical control, model predictive control, and trajectory optimization) and implicit learning-based methods on the right (including reinforcement learning, imitation learning, and neural networks). The diagram illustrates the spectrum from model-based to data-driven approaches in robotics.

Overview of methods to generate motion (clearly non-exhaustive). The different methods can be grouped based on whether they explicitly (dynamics-based) or implicitly (learning-based) model robot-environment interactions.

Robotics is concerned with producing artificial motion in the physical world in a useful, reliable, and safe fashion.

Thus, robotics is an inherently multi‑disciplinary domain: producing autonomous motion in the physical world requires, at the very least, interfacing different software (motion planners) and hardware (motion executors) components.

Further, knowledge of mechanical, electrical, and software engineering, as well as rigid‑body mechanics and control theory have proven quintessential in robotics since the field first developed in the 1950s. More recently, Machine Learning (ML) has also proved effective in robotics, complementing these more traditional disciplines.

As a direct consequence of its multi‑disciplinary nature, robotics has developed as a wide array of methods, all concerned with the main purpose of producing artificial motion in the physical world.

Methods to produce robot motion range from traditional explicit models (dynamics‑based methods, leveraging precise descriptions of the mechanics of robots' rigid bodies and their interactions with possible obstacles in the environment) to implicit models (learning‑based methods, treating artificial motion as a statistical pattern to be learned from multiple sensorimotor readings).

A variety of methods have been developed between these two extremes. The figure above illustrates some of the most relevant techniques.

In this section, our goal is to introduce where classical methods excel, where they struggle, and why learning‑based approaches are helpful.

Explicit vs Implicit Models:

Explicit (dynamics-based) approaches rely on hand-crafted mathematical models of physics and require deep domain expertise to implement effectively. These methods work exceptionally well for well-understood, controlled scenarios where the physics can be precisely modeled. Classic examples include PID controllers and Model Predictive Control systems that have been the backbone of industrial robotics for decades.

Implicit (learning-based) approaches take a fundamentally different strategy by learning patterns directly from data rather than requiring explicit mathematical models. These methods require less domain-specific engineering and can adapt to complex, uncertain environments that would be difficult to model analytically. Neural networks and reinforcement learning algorithms are prime examples of this approach.

Hybrid approaches represent an exciting middle ground, combining the reliability of physics knowledge with the adaptability of learning systems. These methods use physics knowledge to guide and constrain the learning process, often achieving better performance than either approach alone.

Different Types of Motion

Now that we have the big picture, we can situate the problem: what kinds of motion do robots typically perform?

A collection of six different robotic platforms showing the diversity of robot designs: ViperX (a small desktop robotic arm), SO-100 (an open-source 3D-printable arm), Boston Dynamics' Spot (a four-legged walking robot), Open-Duck (a wheeled mobile robot), 1X's NEO (a humanoid robot), and Boston Dynamics' Atlas (an advanced bipedal humanoid robot). The image demonstrates how different robot designs are optimized for different types of motion and tasks.

Different kinds of motion are achieved with potentially very different robotic platforms. From left to right, top to bottom: ViperX, SO-100, Boston Dynamics' Spot, Open-Duck, 1X's NEO, Boston Dynamics' Atlas. This is an example list of robotic platforms and is (very) far from exhaustive.

At a high level, most systems you’ll encounter fall into three categories. Knowing which bucket you’re in helps you choose models, datasets, and controllers appropriately.

In the vast majority of instances, robotics deals with producing motion by actuating joints that connect (nearly) rigid links. A key distinction between focus areas in robotics is based on whether the generated motion modifies the absolute state of the environment through dexterous interactions, changes the relative state of the robot with respect to its environment through mobility, or combines both capabilities.

Manipulation involves generating motion to perform actions that induce desirable modifications in the environment. These effects are typically achieved through the robot - for example, a robotic arm grasping objects, assembling components, or using tools. The robot changes the world around it while remaining in a fixed location.

Locomotion encompasses motions that result in changes to the robot’s physical location within its environment. This general category includes both wheeled locomotion (like mobile bases and autonomous vehicles) and legged locomotion (like walking robots and quadrupeds), depending on the mechanism the robot uses to move through its environment.

Mobile manipulation represents a more complex category that combines both manipulation and locomotion capabilities. These systems can interact with and move within their environment, enabling much more dynamic and versatile robot-environment interactions. However, this versatility comes at the cost of significantly increased complexity, as mobile manipulation systems must coordinate a much larger set of control variables compared to either pure locomotion or manipulation systems.

Quick classifier: ask “what changes?” If mainly the world changes (object pose/state), you’re in manipulation. If mainly the robot pose changes, you’re in locomotion. If both change meaningfully within the task, you’re in mobile manipulation. This simple test helps when designing observations, actions, and evaluation.

We’ll reuse this taxonomy when discussing datasets (what sensors you need) and policies (what action spaces you predict) in the next sections.

Example: Planar Manipulation

Let’s ground the ideas with a concrete, minimal example you can reason about step by step.

Robot manipulators typically consist of a series of links and joints, articulated in a chain finally connected to an end-effector. Actuated joints are considered responsible for generating motion of the links, while the end effector is instead used to perform specific actions at the target location (e.g., grasping/releasing objects via closing/opening a gripper end-effector, using a specialized tool like a screwdriver, etc.).

Recently, the development of low-cost manipulators like the ALOHA, ALOHA-2 and SO-100/SO-101 platforms significantly lowered the barrier to entry to robotics, considering the increased accessibility of these robots compared to more traditional platforms like the Franka Emika Panda arm.

Robot Cost Comparison

Cheaper, more accessible robots are starting to rival traditional platforms like the Panda arm in adoption in resource-constrained scenarios. The SO-100, in particular, costs a few hundred Euros and can be entirely 3D-printed in hours, while the industrially-manufactured Panda arm costs tens of thousands of Euros and is not openly available.

Forward and Inverse Kinematics

SO-100 to Planar Manipulator

The SO-100 is a manipulator arm with 5 actuated joints plus a gripper. Preventing some of its joints (shoulder pan, wrist flex and wrist roll) from actuating, it can be represented as a traditional 2‑DoF planar manipulator (the gripper joint in the end‑effector is not counted toward the degrees of freedom used to produce motion).

Consider the (simple) case where a SO‑100 is restrained from actuating (1) the shoulder pan and (2) the wrist flex and roll motors. This reduces the degrees of freedom of the SO‑100 from the original 5+1 (5 joints + 1 gripper) to 2+1 (shoulder lift, elbow flex + gripper).

Let us make the simplifying assumption that actuators can produce rotations up to $2\pi$ radians. All these simplifying assumptions leave us with the planar manipulator where we can control the angles $\theta_1$ and $\theta_2$, jointly referred to as the robot’s configuration, and indicated with $q = [\theta_1, \theta_2 ] \in [-\pi, +\pi]^2$.

Three illustrative scenarios for the planar manipulator: free motion (the arm is free to move), a floor constraint (motion constrained by the surface), and multiple constraints (motion constrained by the surface and a fixed obstacle).

Considering this example, we can analytically write the end-effector's position $p \in \mathbb{R}^2$ as a function of the robot's configuration, $p = p(q)$:

$$p(q) = \begin{pmatrix} l \cos(\theta_1) + l \cos(\theta_1 + \theta_2) \\ l \sin(\theta_1) + l \sin(\theta_1 + \theta_2) \end{pmatrix}$$
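As a sketch, the FK map above can be written directly in code. This is a minimal NumPy example, assuming unit link lengths ($l = 1$); the function name `forward_kinematics` is just for illustration:

```python
import numpy as np

def forward_kinematics(q, l=1.0):
    """End-effector position p(q) for the 2-DoF planar arm with equal link lengths l."""
    theta1, theta2 = q
    return np.array([
        l * np.cos(theta1) + l * np.cos(theta1 + theta2),
        l * np.sin(theta1) + l * np.sin(theta1 + theta2),
    ])

# Arm fully stretched along the x-axis: both joint angles at zero
print(forward_kinematics([0.0, 0.0]))  # -> [2. 0.]
```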

Forward Kinematics (FK) maps a robot configuration into the corresponding end-effector pose, whereas Inverse Kinematics (IK) is used to reconstruct the configuration(s) given an end-effector pose.

In the simplified case considered here, one can solve the problem of controlling the end-effector's location to reach a goal position $p^*$ by solving analytically for $q$ such that $p(q) = p^*$. However, in the general case, one might not be able to solve this problem analytically, and typically resorts to iterative optimization methods:

$$\min_{q \in \mathcal{Q}} \|p(q) - p^*\|_2^2$$

Exact analytical solutions to IK are even less appealing when one considers the presence of obstacles in the robot’s workspace, resulting in constraints on the possible values of $q$.

If the algebra feels dense, focus on the mapping: FK answers “where is the hand given the joints?”, IK asks “what joints reach that hand position?”. The rest of the unit shows why the IK direction becomes hard in realistic settings.
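To make the iterative view concrete, here is a minimal sketch of solving the (unconstrained) IK objective above by plain gradient descent, again assuming unit link lengths. The gradient of $\|p(q) - p^*\|_2^2$ is $2 J(q)^\top (p(q) - p^*)$, which anticipates the Jacobian introduced in the next subsection; all function names and step sizes here are illustrative:

```python
import numpy as np

def fk(q, l=1.0):
    """Forward kinematics of the 2-DoF planar arm."""
    t1, t2 = q
    return np.array([l * np.cos(t1) + l * np.cos(t1 + t2),
                     l * np.sin(t1) + l * np.sin(t1 + t2)])

def jacobian(q, l=1.0):
    """Partial derivatives of fk with respect to q = [theta1, theta2]."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

def ik_gradient_descent(p_star, q0, lr=0.05, iters=1000):
    """Minimize ||p(q) - p*||^2 over q by gradient descent."""
    q = np.array(q0, dtype=float)
    p_star = np.asarray(p_star, dtype=float)
    for _ in range(iters):
        err = fk(q) - p_star
        q = q - lr * 2.0 * jacobian(q).T @ err  # gradient of the squared error
    return q

q_sol = ik_gradient_descent(p_star=[1.0, 1.0], q0=[0.3, 0.8])
```

Note that gradient descent returns one of the (generally multiple) configurations reaching $p^*$; which one depends on the initial guess $q_0$.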

Differential Inverse Kinematics

When IK is hard to solve directly, we can often make progress by working with small motions (velocities) instead of absolute positions.

Let $J(q)$ denote the Jacobian matrix of (partial) derivatives of the FK-function. Then, one can apply the chain rule to any $p(q)$, deriving $\dot{p} = J(q) \dot{q}$, and thus finally relating variations in the robot configurations to variations in pose.

Given a desired end-effector trajectory, differential IK finds $\dot{q}(t)$, solving for joint velocities instead of configurations:

$$\dot{q}(t) = \arg\min_\nu \|J(q(t)) \, \nu - \dot{p}^*(t)\|_2^2$$

This often admits the closed-form solution $\dot{q} = J(q)^+ \dot{p}^*$, where $J(q)^+$ denotes the Moore-Penrose pseudo-inverse of $J(q)$.
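A minimal sketch of one differential IK step with NumPy's pseudo-inverse, assuming unit link lengths (function names are illustrative):

```python
import numpy as np

def jacobian(q, l=1.0):
    """Jacobian of the planar FK map with respect to q = [theta1, theta2]."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

def diff_ik_step(q, p_dot_star, l=1.0):
    """Least-squares joint velocities: q_dot = J(q)^+ p_dot*."""
    return np.linalg.pinv(jacobian(q, l)) @ np.asarray(p_dot_star)

q = np.array([0.3, 0.5])
q_dot = diff_ik_step(q, [0.1, 0.0])
# Away from singularities, J q_dot reproduces the desired end-effector velocity
print(np.allclose(jacobian(q) @ q_dot, [0.1, 0.0]))  # -> True
```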

Adding Feedback Loops

Moving Obstacle

Planar manipulator robot in the presence of a moving obstacle.

While very effective when a goal trajectory has been well specified, the performance of differential IK can degrade significantly in the presence of modeling/tracking errors, or in the presence of non-modeled dynamics in the environment.

To mitigate the effect of modeling errors, sensing noise and other disturbances, classical pipelines augment differential IK with feedback control, looping back quantities of interest. In practice, following a trajectory with a closed feedback loop might consist of feeding back the error between the target and measured pose, $\Delta p = p^* - p(q)$, thereby modifying the applied control to $\dot{q} = J(q)^+ (\dot{p}^* + k_p \Delta p)$, with $k_p$ the (proportional) gain.
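As a sketch, the closed-loop update can be simulated by integrating $\dot{q} = J(q)^+ (\dot{p}^* + k_p \Delta p)$ with a simple Euler step. Unit link lengths are assumed, and the gain, time step, and function names are illustrative choices, not prescribed values:

```python
import numpy as np

def fk(q, l=1.0):
    """Forward kinematics of the 2-DoF planar arm."""
    t1, t2 = q
    return np.array([l * np.cos(t1) + l * np.cos(t1 + t2),
                     l * np.sin(t1) + l * np.sin(t1 + t2)])

def jacobian(q, l=1.0):
    """Partial derivatives of fk with respect to q = [theta1, theta2]."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

def closed_loop_step(q, p_star, p_dot_star, kp=5.0, dt=0.01):
    """One Euler step of differential IK with proportional feedback on the pose error."""
    delta_p = np.asarray(p_star) - fk(q)                        # feedback term
    q_dot = np.linalg.pinv(jacobian(q)) @ (np.asarray(p_dot_star) + kp * delta_p)
    return q + dt * q_dot

# Regulation to a fixed goal (p_dot* = 0): the pose error shrinks over time
q = np.array([0.3, 0.8])
for _ in range(2000):
    q = closed_loop_step(q, p_star=[1.0, 1.0], p_dot_star=[0.0, 0.0])
```

Even this simple proportional term makes the tracking robust to small perturbations: any deviation between the measured and target pose is corrected at the next step.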

More advanced control techniques, such as feedback linearization, PID control, the Linear Quadratic Regulator (LQR) or Model-Predictive Control (MPC), can be employed to stabilize tracking and reject moderate perturbations.

Limitations of Dynamics-based Robotics

This brings us to the “so what?”: where do these classical tools struggle in practice, and why does that motivate learning?

Despite the last 60+ years of robotics research, autonomous robots are still largely incapable of performing tasks at human-level performance in the physical world while generalizing across (1) robot embodiments (different manipulators, different locomotion platforms, etc.) and (2) tasks (tying shoelaces, manipulating a diverse set of objects).

Classical Limitations

Dynamics-based approaches to robotics suffer from several limitations: (1) orchestrating multiple components poses integration challenges; (2) the need to develop custom processing pipelines for the sensing modalities and tasks considered hinders scalability; (3) simplified analytical models of physical phenomena limit real-world performance. Lastly, (4) dynamics-based methods overlook trends in the availability and growth of robotics data.

Key Limitations

1. Integration Challenges Dynamics-based robotics pipelines have historically been developed sequentially, with the blocks now found in most architectures engineered for specific purposes. That is, sensing, state estimation, mapping, planning, (diff-)IK, and low-level control have traditionally been developed as distinct modules with fixed interfaces. Pipelining these specific modules has proved error-prone, and brittleness emerges, alongside compounding errors, whenever changes occur.

2. Limited Scalability Classical planners operate on compact, assumed-sufficient state representations; extending them to reason directly over raw, heterogeneous and noisy data streams is non-trivial. This results in a limited scalability to multimodal data and multitask settings, as incorporating high-dimensional perceptual inputs (RGB, depth, tactile, audio) traditionally required extensive engineering efforts to extract meaningful features for control.

3. Modeling Limitations Setting aside integration and scalability challenges: accurately modeling contact, friction, and compliance for complicated systems remains difficult. Rigid-body approximations are often insufficient in the presence of deformable objects, and relying on approximate models hinders the real-world applicability of the methods developed.

4. Overlooking Data Trends Lastly, dynamics-based methods (naturally) overlook the rather recent increase in availability of openly-available robotics datasets. The curation of academic datasets by large centralized groups of human experts in robotics is now increasingly complemented by a growing number of robotics datasets contributed in a decentralized fashion by individuals with varied expertise.

Taken together, these limitations motivate the exploration of learning-based approaches that can:

  1. Integrate perception and control more tightly
  2. Adapt across tasks and embodiments with reduced expert modeling interventions
  3. Scale gracefully in performance as more robotics data becomes available