
Planet ROS

Planet ROS - http://planet.ros.org



ROS Discourse General: Call for Proposals: Global ROSCon 2026 in Toronto

ROSCon Global 2026 Call for Proposals Now Open!

The ROSCon call for proposals is now open! You can find full proposal details on the ROSCon 2026 website.

ROSCon Global 2026 will be held in Toronto, Canada, from September 22nd to September 24th, 2026. This year, we are officially adopting the “Global” moniker to reflect our growing international community and the many regional ROSCons happening worldwide.

Submission Deadlines

Important Dates

Diversity Scholarship Program

If you require financial assistance to attend ROSCon Global and meet the qualifications, please apply for our Diversity Scholarship Program. Thanks to our sponsors, scholarships include complimentary registration, four nights of hotel accommodation, and a travel stipend.

The deadline for the scholarship is Sun, Mar 22, 2026 12:00 AM UTC, which is well before the CFP deadlines to allow for travel planning and visa processing.

What are we looking for?

The core of ROSCon is community-contributed content. We are looking for:

We want to see your robots! Whether it is maritime robots, lunar landers, or industrial factory fleets, we want to hear the technical lessons you learned. We encourage original content, high-impact ideas, and, as always, a focus on open-source availability.

How to Prepare

If you are new to ROSCon we recommend reviewing the archive of previous talks. You are also welcome to use this Discourse thread to workshop your ideas and find collaborators.

Questions and concerns can be directed to the ROSCon Executive Committee (roscon-2026-ec@openrobotics.org) or posted in this thread. We look forward to seeing the community in Toronto!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/call-for-proposals-global-roscon-2026-in-toronto/53171

ROS Discourse General: PlotJuggler Bridge Released

I’m happy to introduce PlotJuggler Bridge, a lightweight server that exposes ROS 2 or DDS topics over WebSocket, allowing remote tools like PlotJuggler to access telemetry without directly participating in the middleware network.

In many robotics setups, accessing telemetry from another computer is harder than it should be. DDS discovery over WiFi can be unreliable, opening DDS networks outside the robot can create configuration issues, and installing a full ROS 2 environment on every machine used for debugging is often inconvenient.

PlotJuggler Bridge solves this by acting as a gateway between the middleware network and external clients.
It runs close to the robot, reads the topic data, and exposes it through a simple WebSocket endpoint that any client can connect to.

This approach keeps the ROS/DDS network local while making telemetry easily accessible from other machines.

The project is available here:


Why it is useful

This is especially helpful in scenarios such as:

Because the bridge performs runtime schema discovery, clients can access topics even if they use custom ROS messages, without requiring those message packages to be installed on the client machine.

The bridge also aggregates and optionally compresses data, which helps reduce bandwidth usage and improves stability when streaming telemetry over wireless networks.


Main features

PlotJuggler Bridge includes several features designed for real-world robotics workflows:


How it works

ROS 2 / DDS -> PlotJuggler Bridge -> WebSocket -> PlotJuggler

The bridge subscribes to topics in the ROS/DDS network and exposes them through a WebSocket server.
External tools can connect and receive the streamed telemetry without joining the middleware network.


Quick to start

The bridge can typically be up and running in less than 5 minutes.

Setup instructions are available in the repository README:

You will need PlotJuggler 3.16 or newer, which includes the WebSocket client plugin:


Basic usage

Once the bridge is running, the workflow is straightforward:

  1. Start the bridge on the machine connected to the ROS/DDS network.
  2. Open PlotJuggler on any computer.
  3. In PlotJuggler, open the WebSocket Client and connect using the bridge address.

The available topics will be discovered automatically and can be inspected immediately.
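As an illustration of what a client sees, here is a sketch that flattens a nested telemetry frame into PlotJuggler-style slash-separated series names. The frame layout below is invented for the example; the actual wire format is defined in the PlotJuggler Bridge repository.

```python
# Hypothetical example: flatten a nested telemetry frame (as a bridge like
# this might stream over WebSocket) into slash-separated series names.
# The JSON layout here is invented for illustration only.
import json

frame = json.loads("""
{"topic": "/odom", "stamp": 1718000000.25,
 "data": {"pose": {"x": 1.5, "y": -0.2}, "twist": {"linear": 0.8}}}
""")

def flatten(prefix, value, out):
    """Recursively expand nested dicts into "prefix/key" leaf entries."""
    if isinstance(value, dict):
        for k, v in value.items():
            flatten(f"{prefix}/{k}", v, out)
    else:
        out[prefix] = value

series = {}
flatten(frame["topic"], frame["data"], series)
print(series)  # {'/odom/pose/x': 1.5, '/odom/pose/y': -0.2, '/odom/twist/linear': 0.8}
```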


About the work

My name is Álvaro Valencia, and I am currently working on PlotJuggler as an intern while finishing the last months of my Robotics Software Engineering degree.

I collaborate closely with @facontidavide on this project. PlotJuggler clearly reflects years of work, effort and passion, and contributing to it is a great experience.

Together we are developing the components required to make this new Robot → PlotJuggler connection workflow simple and practical to use. The goal is to make remote telemetry access easier while keeping the system flexible for future extensions that will appear in upcoming PlotJuggler developments.


And stay tuned… more interesting things are coming soon for PlotJuggler.

18 posts - 6 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/plotjuggler-bridge-released/53143

ROS Discourse General: Control NERO’s 7-DoF Effortlessly with MoveIt 2 (Part I)

Let’s Explore Nero – MoveIt2 Edition (Part I)

As a next-generation robot operating system, ROS2 provides powerful support for the intelligent and modular development of robotic arms. As the core motion planning framework in the ROS2 ecosystem, MoveIt2 not only inherits the mature functions of MoveIt but also achieves significant improvements in real-time performance, scalability, and industrial applicability.

Taking a 7-DoF robotic arm as an example, this document provides step-by-step instructions for configuring and generating a complete MoveIt2 package from a URDF model using the MoveIt Setup Assistant, enabling motion planning and visual control. This guide offers a clear, practical workflow for both beginners and developers looking to quickly integrate models into MoveIt2.

Abstract

Exporting MoveIt Package from URDF

Tags

ROS2, moveit2, Robotic Arm, nero

Repository

Environment

OS: Ubuntu 22.04
ROS Distro: Humble

Introduction to MoveIt2

MoveIt2 is the next-generation robotic arm motion planning and control framework developed based on the ROS2 architecture. It can be understood as a comprehensive upgrade of MoveIt in the ROS2 ecosystem. Inheriting the core capabilities of MoveIt, it has made significant improvements in real-time performance, modularity, and industrial application scenarios.

The main problems solved by MoveIt2 include:

Installing MoveIt2

You can install MoveIt2 directly from binary packages; the following command installs all MoveIt-related components:

sudo apt install ros-humble-moveit*

Downloading the URDF File

First, create a new workspace and download the URDF model:

mkdir -p ~/nero_ws/src
cd ~/nero_ws/src
git clone https://github.com/agilexrobotics/piper_ros.git -b humble_beta1
cd ..
colcon build 

After successful compilation, use the following commands to view the model in RViz:

cd ~/nero_ws
source install/setup.bash
ros2 launch nero_description display_urdf.launch.py

Exporting the MoveIt Package Using Setup Assistant

Launch the moveit_setup_assistant:

ros2 launch moveit_setup_assistant setup_assistant.launch.py

Select Create New MoveIt Configuration Package to create a new MoveIt package, then load the robotic arm’s URDF.

Generate the self-collision matrix; for a single arm, the default parameters are fine.

Skip selecting virtual joints and proceed to define planning groups. Here, we need to create two planning groups: the arm planning group and the gripper planning group. First, create the arm planning group; set Group Name to arm, use KDL for the kinematics solver, and select RRTstar for OMPL Planning.

Setting the Kinematic Chain

Add the control joints for the planning group: select joint1 through joint7, click >, then save.

Planning group creation completed.

Setting Robot Poses: you can pre-define some named poses for the planning group here.

Skip End Effectors and Passive Joints, and add interfaces in the URDF.

Setting the controller; here we use position_controllers.

Simulation will generate a URDF file for use in Gazebo, which includes physical properties such as joint motor attributes.

After configuration, fill in your name and email.

Set the package name, then click Generate Package to output the function package.

Launching the MoveIt Package

cd ~/nero_ws
source install/setup.bash
ros2 launch nero_moveit2_config demo.launch.py

After successful launch, you can drag the marker to preset the arm position, then click Plan & Execute to control the robotic arm movement.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/control-nero-s-7-dof-effortlessly-with-moveit-2-part-i/53123

ROS Discourse General: ros2_info — A fastfetch-like system info tool for ROS2

Hey Open Robotics folks :waving_hand:

So I built a small tool called ros2_info.
The idea was simple: what if fastfetch, but for your entire ROS2 environment?

One command → instant snapshot of everything happening in your ROS2 setup.

What it shows:
• ROS2 distro + whether it’s LTS or nearing EOL
• Live nodes, topics, services, and actions
• Auto-detects which DDS middleware you’re running
• All detected colcon workspaces + their build status
• Installed ROS2 packages grouped by category
• System stats (CPU, RAM, Disk)
• Pending ROS2-related apt updates
• A small web dashboard at localhost:8099

Basically the stuff I kept checking with 10 different commands… now in one place :sweat_smile:

Works across ROS2 distros: Foxy → Humble → Iron → Jazzy → Rolling

GitHub:
https://github.com/zang7777/ros2_info

Install

cd ~/ros2_ws/src
git clone https://github.com/zang7777/ros2_info.git
cd ~/ros2_ws && colcon build --symlink-install
source install/setup.bash

Then run it interactively (recommended):

ros2 run ros2_info ros2_info --interactive

or simply:

ros2 run ros2_info ros2_info

Always fun building little dev tools for the ecosystem :robot:

“Created by roboticists, for roboticists”

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros2-info-a-fastfetch-like-system-info-tool-for-ros2/53105

ROS Discourse General: ROS 2 Rust Meeting: March 2026

The next ROS 2 Rust Meeting will be Mon, Mar 9, 2026 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-rust-meeting-march-2026/53066

ROS Discourse General: A Day at ROSCon Japan 2025 – What It’s Like to Attend as a Robotics Engineer

Hi everyone,

I recently had the chance to attend ROSCon Japan 2025, and it was an amazing experience meeting people from the ROS community, seeing robotics demos, and learning about the latest developments in ROS.

I made a short vlog to capture the atmosphere of the event. In the video, I shared some highlights including:

It was inspiring to see how the ROS ecosystem continues to grow and how many interesting robotics applications are being developed.

If you couldn’t attend the event or are curious about what ROSCon JP looks like, feel free to check out the video.

YouTube:

A Day at ROSCon JP 2025 | Robotics Engineer Vlog

Hope you enjoy it!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/a-day-at-roscon-japan-2025-what-it-s-like-to-attend-as-a-robotics-engineer/53045

ROS Discourse General: LSEP: Open protocol for standardized robot-to-human state communication (light + sound + motion)

Hello ROS community,

I’d like to introduce LSEP (Light Signal Expression Protocol) — an open standard I’ve been developing for how robots communicate their internal state to nearby humans using coordinated light signals, sound, and motion cues.

The problem LSEP solves:

Every robot manufacturer currently invents their own LED patterns and sound cues. There’s no shared vocabulary. A blinking blue light could mean “charging” on one platform and “human detected” on another. With the EU AI Act (Art. 50) now requiring transparency for human-facing AI systems, the industry needs a standardized approach.

What LSEP defines:

- 6 core states: IDLE, AWARENESS, INTENT, CARE, CRITICAL, THREAT

- 3 extended states: MED_CONF, LOW_CONF, INTEGRITY (for sensor uncertainty and self-diagnostics)

- Each state maps to specific light color + pulse pattern, optional sound, and motion modifier

- State transitions driven by Time-to-Contact (TTC) physics, not heuristics

- 1.5m proximity floor: any human within 1.5m triggers minimum AWARENESS
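As a toy sketch of how these rules might be consumed in code: the function below maps human distance and time-to-contact (TTC) to an LSEP core state. Apart from the 1.5 m proximity floor, the numeric thresholds here are invented for illustration; the real transition logic is defined in the LSEP specification.

```python
# Hypothetical LSEP state selector a ROS 2 node could wrap, e.g. publishing
# the result on a /lsep_state topic. Thresholds other than the 1.5 m
# proximity floor are illustrative, not taken from the specification.

LSEP_CORE = ("IDLE", "AWARENESS", "INTENT", "CARE", "CRITICAL", "THREAT")

def select_state(human_distance_m: float, ttc_s: float) -> str:
    """Map human distance and time-to-contact (TTC) to an LSEP core state."""
    if ttc_s < 1.0:             # imminent contact (illustrative threshold)
        return "CRITICAL"
    if ttc_s < 3.0:             # closing fast (illustrative threshold)
        return "INTENT"
    if human_distance_m < 1.5:  # LSEP's 1.5 m proximity floor
        return "AWARENESS"
    return "IDLE"

print(select_state(5.0, 10.0))  # IDLE
print(select_state(1.0, 10.0))  # AWARENESS (within the proximity floor)
print(select_state(1.0, 0.5))   # CRITICAL
```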

Technical details:

- RFC style specification (v2.0)

- Machine readable JSON signal definitions

- Unity prototype (HDRP) with 74 tests, including sensor noise simulation and tracking dropouts

- MIT licensed — use it however you want

Why I’m posting here:

ROS is where robot software gets built. If LSEP is going to be useful, it needs to work in your stacks — as a ROS node, a topic publisher, or a behavior tree integration. I’m looking for:

1. Feedback on the state model — Do 9 states cover the scenarios you encounter? What’s missing?

2. Integration ideas — How would you want to consume LSEP in a ROS 2 pipeline? As a `/lsep_state` topic? A lifecycle node?

3. Real-world edge cases — What breaks first when you imagine deploying this on your robot?

Links:

- Specification + demo: [lsep.org](https://lsep.org)

- GitHub: [NemanjaGalic/LSEP](https://github.com/NemanjaGalic/LSEP) (open protocol for standardized human-robot communication: 9 states, 3 modalities, 1 grammar; physics-based; EU AI Act ready)

Happy to answer questions and discuss. The goal is to make this the “USB-C of robot communication” — one standard, every platform.

5 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/lsep-open-protocol-for-standardized-robot-to-human-state-communication-light-sound-motion/52997

ROS Discourse General: Rover + LiDAR perception inside a Forest3D-generated world (Gazebo Harmonic)

Rover + LiDAR inside a Forest3D-generated world (Gazebo Harmonic)

A quick demonstration of spawning a robot and running LiDAR perception inside a Forest3D-generated environment with realistic visuals, making it a solid base for mapping and navigation tasks.

:play_button: Watch on YouTube

Performance can be improved by tuning the mesh decimation level depending on your use case.

Current work: integrating terramechanics for more realistic rover-terrain interaction.

Forest3D supports a variety of environments beyond forests, including lunar and other unstructured terrains. Feel free to reach out!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/rover-lidar-perception-inside-a-forest3d-generated-world-gazebo-harmonic/52977

ROS Discourse General: Part 2: Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-03-09

Please come and join us for this coming meeting at Mon, Mar 9, 2026, 4:00–5:00 PM UTC, where we plan to continue deploying an example Canonical Observability Stack (COS) instance based on information from the tutorials and documentation. This session will pick up where the last session left off: an AWS instance hosting the COS server side, and a VirtualBox VM hosting the robot side.

Last session, we started working through the documentation for setting up both a COS server instance and a robot instance. Unfortunately, the recording cut out shortly into the meeting due to lack of disk space. After this point, we switched to hosting in AWS and were able to host a COS instance, although it was misconfigured and the robot was unable to connect. If you’re interested to watch the recorded part of the meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/part-2-canonical-observability-stack-tryout-cloud-robotics-wg-meeting-2026-03-09/52967

ROS Discourse General: Pixhawk - ardusub setup ( Roll hold)

Hi all,

We’re building an ROV using Pixhawk (ArduSub) with a Raspberry Pi companion computer (ROS2 + MAVROS). The vehicle needs to attach and operate along vertical surfaces, so maintaining controlled roll while maneuvering is a core requirement.

Stack

Goal

We want joystick-based control similar to POSHOLD stability, but still allow roll control so the vehicle can move along the surface while attached.

Thanks in advance — happy to share more details about the vehicle config if helpful.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/pixhawk-ardusub-setup-roll-hold/52954

ROS Discourse General: Is there a working group for maintaining ROS 2-based robots in industry? 🤖

Hi everyone,

We’re curious — does a dedicated working group (or similar community) already exist for maintaining and operating ROS 2-based robots in industrial environments? If not, maybe it’s time to build one.

At Siemens, our ROS 2 efforts are focused on four key challenges:

We’d love to connect with the community and learn what’s already out there! :globe_showing_europe_africa:

We’re actively looking to engage with others working in this space — whether you’re building solutions, facing the same challenges, or have already found answers we haven’t discovered yet.

Here are some data points we’ve gathered so far:

Exciting tools that just dropped :hammer_and_wrench:

The community has been busy! A few noteworthy new tools:

Big shoutout to @doisyg for sharing impressive insights on how they manage upgrades across a large fleet of robots in the field! :clap:
And I am sure there is a vast array of further open-source tools out there that can help all of us.

What Siemens has shared so far (all talks in English)

We’ve been open about our own challenges and learnings:

:speech_balloon: Our concrete question to you:

Would you be interested in joining a regular working group to discuss these topics and align our open-source efforts?

Vote below — even a single click tells us a lot! :backhand_index_pointing_down:

Click to view the poll.

Let’s build in the open — together! :handshake:

We’re strong believers in open collaboration. Whether you’re a researcher, developer, or industry practitioner — let’s align our efforts and avoid reinventing the wheel.

A few things we’d especially love to hear about:

Cheers from Germany :clinking_beer_mugs:
Florian


Update as of Tue, Mar 3, 2026 11:00 PM UTC

Let’s try to pin down when a potential virtual meeting could happen:
(Please also vote even if the day does not fit; right now I am more interested in finding the right time of day.)

Click to view the poll.

27 posts - 11 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/is-there-a-working-group-for-maintaining-ros-2-based-robots-in-industry/52899

ROS Discourse General: ROS Meetup Medellín Colombia - 29-30 Apr 2026

We are pleased to officially announce ROS Meetup Medellín 2026, a space designed to bring together the robotics, ROS, and autonomous systems community in Colombia.

:round_pushpin: April 29 – Universidad EIA (Poster Session)
:round_pushpin: April 30 – Parque Explora (Talk Session)

Medellín, recognized for its strong innovation and technology ecosystem, will be the perfect setting to connect academia, industry, and the open-source community around ROS and robotics.

:microphone: Call for Speakers open
:framed_picture: Call for Posters open
:busts_in_silhouette: Attendee registration available

If you are developing ROS-based projects, conducting robotics research, or building AI-driven and autonomous systems solutions, we invite you to share your work and actively participate in the event.

Find all the information and registration links here:
:link: https://linktr.ee/IEEE_RAS_Colombia

We look forward to having you join us in this edition and to continue strengthening the ROS community in Colombia.

See you in Medellín :robot:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-meetup-medellin-colombia-29-30-apr-2026/52892

ROS Discourse General: MAHE Mobility Challenge 2026 (MIT Bengaluru)

Hello ROS Community,

MAHE Mobility Challenge 2026 is a national-level hybrid hackathon hosted by CEAM and the Department of ECE at Manipal Institute of Technology (MIT), Bengaluru.

This challenge is designed for B.Tech students passionate about autonomous and connected mobility systems, offering an opportunity to ideate, design, and build real working prototypes addressing next-generation mobility challenges.

Total Prize Pool: ₹3 Lakhs


Challenge Tracks

• AI in Mobility
Intelligent perception systems, predictive modeling, adaptive routing, autonomy stacks

• Robotics & Control
Embedded systems, actuator integration, simulation workflows, control architecture design

• Cybersecurity for Mobility
Secure V2X communication, threat modeling, safety-focused system hardening for connected vehicles


Format & Timeline

Shortlisted teams will build and demonstrate working prototypes during the final round.

Participants are encouraged to leverage open-source robotics frameworks (ROS), simulation environments, and modular autonomy architectures where relevant.

We welcome engagement from students and robotics enthusiasts interested in contributing to secure and intelligent mobility systems.

Further details and registration:
https://mahemobility.mitblr.org/

Looking forward to participation and discussion from the ROS community.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/mahe-mobility-challenge-2026-mit-bengaluru/52886

ROS Discourse General: SIPA: Quantifying Physical Integrity and the Sim-to-Real Gap in 7-DoF Trajectories

Introduction:

SIPA (Spatial Intelligence Physical Audit) is a trajectory-level physical consistency diagnostic. It does not require source code access or internal simulator states and directly audits 7-DoF CSV trajectories. By design, SIPA is compatible with any system that produces spatial motion data. Its principle is based on the Non-Associative Residual Hypothesis (NARH).

1. What SIPA Can Audit

SIPA operates on the final motion output, enabling post-hoc physical forensics for:

Supported Data Pathways:

2. The Logic: Non-Associative Residual Hypothesis (NARH)

NARH posits that physical inconsistency stems from discrete solver ordering rather than just algebraic error.

(1)Setting

Consider a rigid-body simulation system defined by:

s_{t+1} = \Psi_{\sigma(k)} \circ \cdots \circ \Psi_{\sigma(1)} (s_t)

where � is an execution order induced by:

Each \Psi_i is individually well-defined, but their composition order may vary.

(2) Order Sensitivity

Although each operator \Psi_i belongs to an associative algebra (e.g., matrix multiplication, quaternion composition), the composition of numerically approximated operators may satisfy:

(\Psi_a \circ \Psi_b) \circ \Psi_c \neq \Psi_a \circ (\Psi_b \circ \Psi_c)

due to:

Define the discrete associator:

A(a,b,c;s) = \bigl( (\Psi_a \circ \Psi_b) \circ \Psi_c \bigr)(s) - \bigl( \Psi_a \circ (\Psi_b \circ \Psi_c) \bigr)(s)

(3) Definition: Non-Associative Residual

We define the Non-Associative Residual (NAR) at state s_t as:

R_t = \lVert A(a,b,c; s_t) \rVert

for a chosen triple of sub-operators representative of contact or constraint updates.

This residual measures path-dependence induced by discrete solver ordering, not algebraic non-associativity of the state representation.
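As a toy numerical illustration of the discrete associator defined above: exact function composition is associative, so the residual only appears when the composed operator is itself formed in finite precision. The sketch below uses scalar scaling operators fused with deliberately coarse rounding (standing in for solver arithmetic); the values and rounding level are invented for the example.

```python
# Toy illustration of the discrete associator A(a,b,c;s): each Psi_i is a
# scalar scaling operator, and compositions are fused with coarse rounding
# to mimic finite-precision solver arithmetic.

def approx_compose(x, y, digits=2):
    """Fuse two scaling operators, rounding the product to `digits` places."""
    return round(x * y, digits)

def associator(a, b, c, s, digits=2):
    """R_t = ||A(a,b,c; s)||: difference between the two groupings."""
    left  = approx_compose(approx_compose(a, b, digits), c, digits) * s
    right = approx_compose(a, approx_compose(b, c, digits), digits) * s
    return abs(left - right)

R = associator(1.414, 1.732, 2.236, s=1.0)
print(R)  # nonzero: the rounded compositions are order-dependent
```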

(4) Hypothesis (NARH)

In high-interaction-density regimes (e.g., contact-rich robotics, high-speed manipulation), the Non-Associative Residual R_t becomes non-negligible relative to scalar stability metrics, and accumulates over time as a structured drift term.

Formally, there exists a regime such that:

\sum_{t=0}^{T} R_t \not\approx 0

even when:

\Vert s_{t+1} - s_t \Vert remains bounded.

(5) Interpretation

This hypothesis does not claim:

Instead, it asserts:

Discrete parallel constraint resolution introduces a measurable order-dependent residual that is not explicitly encoded in the state space.

This residual may contribute to:

(6) Falsifiability

NARH is falsified if:

  1. R_t remains within numerical noise across interaction densities.

  2. Reordering constraint application yields statistically indistinguishable trajectories.

  3. Scalar metrics (e.g., kinetic energy norm, velocity norm) detect instability earlier or equally compared to any associator-derived signal.

(7) Research Implication

If validated, NARH suggests that:

If invalidated, the experiment establishes an empirically order-invariant regime — a valuable boundary characterization of solver behavior.

3. Physical Integrity Rating (PIR)

SIPA introduces the Physical Integrity Rating (PIR), a heuristic composite indicator designed to quantify the causal reliability of motion trajectories. PIR evaluates whether a world model is “physically solvent” or accumulating “kinetic debt.”

The Metric

PIR = Q_{\text{data}} \times (1 - D_{\text{phys}})

:bar_chart: Credit Rating Scale

| PIR Score | Rating | Label | Operational Meaning |
|---|---|---|---|
| ≥ 0.85 | A | High Integrity | Reliable for industrial simulation and safety-critical AI. |
| ≥ 0.70 | B | Acceptable | Generally consistent; minor numerical drift detected. |
| ≥ 0.50 | C | Speculative | Visual plausibility maintained, but causal logic is shaky. |
| ≥ 0.30 | D | High Risk | Elevated physical debt; prone to “hallucinations” under stress. |
| < 0.30 | F | Critical | Physical bankruptcy; trajectory violates fundamental causality. |

Note on Early Adoption: Since the repository went live, we’ve observed a notable anomaly: 120 institutional entities cloned the repo via CLI with near-zero web UI traffic. This suggests that the industry (Sim-to-Real teams and Tech DD leads) is already utilizing NARH for internal audits. View Traffic Evidence

Call to Action

We invite the ROS community to stress-test their simulators and world models using SIPA. Any questions can be discussed under this topic!

GitHub Repository: https://github.com/ZC502/SIPA.git

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/sipa-quantifying-physical-integrity-and-the-sim-to-real-gap-in-7-dof-trajectories/52884

ROS Discourse General: NVIDIA Isaac ROS 4.2 for DGX Spark has arrived

NVIDIA Isaac ROS 4.2 for DGX Spark is now live.

Here’s what’s new in 4.2:

Check out the full Isaac ROS 4.2 details and share what you build with this release. :rocket:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/nvidia-isaac-ros-4-2-for-dgx-spark-has-arrived/52858

ROS Discourse General: Controlling the Nero Robotic Arm with OpenClaw

Controlling the Nero Robotic Arm with OpenClaw

As a popular open-source project, OpenClaw has become a highlight in the robotic arm control field with its intuitive operation and strong adaptability. It enables full end-to-end linkage between AI commands and device execution, greatly lowering the barrier for robotic arm control. This article focuses on practical implementation. Combined with the pyAgxArm SDK, we will guide you through the download, installation, and configuration of OpenClaw to achieve efficient control of the NERO 7-axis robotic arm.

Seamless AI Control: AgileX NERO 7-DoF Arm with OpenClaw

Download and Install OpenClaw

Start Configuring OpenClaw

Teach OpenClaw the Skill and Rules for Controlling the Robotic Arm

---
name: agx-arm-codegen
description: Guide OpenClaw to generate pyAgxArm-based robotic arm control code from user natural language. When users describe robotic arm movements with prompts and existing scripts cannot directly meet the requirements, automatically organize and generate executable Python scripts based on the APIs and examples provided by this skill.
metadata:
  {
    "openclaw":
      {
        "emoji": "烙",
        "requires": { "bins": ["python3", "pip3"] },
      },
  }
---

## Function Overview

- This skill is used to **guide OpenClaw to generate** executable pyAgxArm control code (Python scripts) based on user natural language descriptions, rather than just calling existing CLIs.
- Reference SDK: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Reference example: `pyAgxArm/demos/nero/test1.py`.

## When to Use This Skill

- Users say "Write code to control the robotic arm", "Generate a control script based on my description", "Make the robotic arm perform multiple actions in sequence", etc.
- Users explicitly request to "generate Python code" or "provide a runnable script" to control AgileX robotic arms such as Nero/Piper.

## Generate Code Using This Skill
   - Based on user prompts, combine the APIs and templates in `references/pyagxarm-api.md` of this skill to generate a complete, runnable Python script.
   - After generation, explain: the script needs to run in an environment with pyAgxArm and python-can installed, and CAN must be activated and the robotic arm powered on; remind users to pay attention to safety (no one in the workspace, small-scale testing first is recommended).

## Rules for Generating Code

1. **Connection and Configuration**
   - Use `create_agx_arm_config(robot="nero", comm="can", channel="can0", interface="socketcan")` to create a configuration (Nero example; Piper can use `robot="piper"`).
   - Use `AgxArmFactory.create_arm(robot_cfg)` to create a robotic arm instance, then `robot.connect()` to establish a connection.
2. **Enabling and Pre-Motion**
   - CRITICAL: The robot MUST BE ENABLED before switching modes. If the robot is in a disabled state, you cannot switch modes.
   - Switch to normal mode before movement, then enable: `robot.set_normal_mode()`, then poll `robot.enable()` until successful; you can set `robot.set_speed_percent(100)`.
   - Motion modes: Whenever using move_* or needing to switch to * mode, explicitly set `robot.set_motion_mode(robot.MOTION_MODE.J)` (Joint), `P` (Point-to-Point), `L` (Linear), `C` (Circular), `JS` (Joint Quick Response, use with caution).
3. **Motion Interfaces and Units**
   - Joint motion: `robot.move_j([j1, j2, ..., j7])`, unit is **radians**, Nero has 7 joints.
   - Cartesian: `robot.move_p(pose)` / `robot.move_l(pose)`, pose is `[x, y, z, roll, pitch, yaw]`, position unit is **meters**, attitude is **radians**.
   - Circular: `robot.move_c(start_pose, mid_pose, end_pose)`, each pose is 6 floating-point numbers.
   - CRITICAL: All movement commands (move_j, move_js, move_mit, move_c, move_l, move_p) must be used in normal mode
   - After motion completion, poll `robot.get_arm_status().msg.motion_status == 0` or encapsulate `wait_motion_done(robot, timeout=...)` before executing the next step.
4. **Mode Switching**
   - Switching modes (master, slave, normal) requires 1s delay before and after the mode switch
   - Use `robot.set_normal_mode()` to set normal mode
   - Use `robot.set_master_mode()` to set master mode
   - Use `robot.set_slave_mode()` to set slave mode
   - CRITICAL: Enable the robot FIRST with `robot.enable()` BEFORE switching modes
5. **Safety and Conclusion**
   - In the generated script, note: confirm workspace safety before execution; small-scale movement is recommended for the first time; use physical emergency stop or `robot.electronic_emergency_stop()` / `robot.disable()` in case of emergency.
   - If the user requests "disable after completion", call `robot.disable()` at the end of the script.
6. **Implementation Details**
   - When waiting for motion to complete, use a short timeout (2-3 seconds)
   - After each arm operation, add a small sleep (0.01 seconds)
   - Motion completion detection: `robot.get_arm_status().msg.motion_status == 0` (not == 1)
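The mode-switching and enabling rules above can be sketched as two small helpers. This is a hedged sketch: the helper names (`switch_mode_with_delay`, `enable_blocking`) are illustrative inventions; only `enable()`, the polling pattern, and the 1 s delay rule come from this document.

```python
import time


def switch_mode_with_delay(set_mode, delay: float = 1.0) -> None:
    """Switch modes (normal/master/slave) with the required 1 s delay
    before and after the switch, per the guidelines above."""
    time.sleep(delay)  # delay before the mode switch
    set_mode()         # e.g. robot.set_normal_mode
    time.sleep(delay)  # delay after the mode switch


def enable_blocking(robot, poll_interval: float = 0.01) -> None:
    """Poll robot.enable() until it reports success."""
    while not robot.enable():
        time.sleep(poll_interval)
```

For example, `switch_mode_with_delay(robot.set_normal_mode)` followed by `enable_blocking(robot)` reproduces the startup order used in the minimal template.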

## Reference Files

- **API and Minimal Runnable Template**: `references/pyagxarm-api.md`  
  When generating code, refer to the interfaces and code snippets in this file to ensure consistency with pyAgxArm and test1.py usage.

## Safety Notes

- The generated code will drive a physical robotic arm. Remind users to confirm that no people or obstacles are in the workspace before execution, and to test with small movements and low speeds first.
- High-risk modes (such as `move_js`, `move_mit`) should have their risks flagged in code comments or user-facing explanations, and should only be used once the consequences are understood.
- This skill only guides code generation and does not execute movements itself; users must prepare the runtime environment, CAN activation, and pyAgxArm installation themselves (see the environment preparation in the agx-arm skill).
# pyAgxArm API Quick Reference & Minimal Runnable Template

For reference when OpenClaw generates robotic arm control code from user natural language. SDK source: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Example reference: `pyAgxArm/demos/nero/test1.py`.

## 1. Connection and Configuration

```python
from pyAgxArm import create_agx_arm_config, AgxArmFactory

# Configuration: robot options - nero / piper / piper_h / piper_l / piper_x; channel e.g. can0
robot_cfg = create_agx_arm_config(
    robot="nero",
    comm="can",
    channel="can0",
    interface="socketcan",
)
robot = AgxArmFactory.create_arm(robot_cfg)
robot.connect()
```

## 2. Enabling and Modes

```python
import time

robot.set_normal_mode()   # Normal mode (single arm control)
# Enable: poll until successful
while not robot.enable():
    time.sleep(0.01)

robot.set_speed_percent(100)   # Motion speed percentage 0–100
# Disable
while not robot.disable():
    time.sleep(0.01)
```

## 3. Motion Modes and Interfaces

| Mode | Constant | Interface | Description |
| --- | --- | --- | --- |
| Joint Position Speed | `robot.MOTION_MODE.J` | `robot.move_j([j1..j7])` | 7 joint angles (radians), with smoothing |
| Joint Quick Response | `robot.MOTION_MODE.JS` | `robot.move_js([j1..j7])` | No smoothing, use with caution |
| Point-to-Point | `robot.MOTION_MODE.P` | `robot.move_p([x,y,z,roll,pitch,yaw])` | Cartesian pose, meters/radians |
| Linear | `robot.MOTION_MODE.L` | `robot.move_l([x,y,z,roll,pitch,yaw])` | Linear trajectory |
| Circular | `robot.MOTION_MODE.C` | `robot.move_c(start_pose, mid_pose, end_pose)` | Each pose is 6 floating-point numbers |
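As a usage illustration for the circular interface, here is a hedged sketch wrapping `move_c` and completion polling in one function. `run_circular_move` is an illustrative name, not a pyAgxArm API; the calls it makes (`set_motion_mode`, `MOTION_MODE.C`, `move_c`, `get_arm_status`) follow the usage documented in this reference, and the function assumes the robot is already connected, enabled, and in normal mode.

```python
import time


def run_circular_move(robot, start_pose, mid_pose, end_pose,
                      timeout: float = 3.0) -> bool:
    """Sketch of a circular move; each pose is [x, y, z, roll, pitch, yaw]
    in meters/radians. Returns True once the arm reports motion finished."""
    robot.set_motion_mode(robot.MOTION_MODE.C)
    robot.move_c(start_pose, mid_pose, end_pose)
    time.sleep(0.01)  # small sleep after each arm operation
    # Poll motion_status == 0 (motion finished) with a short timeout.
    start_t = time.monotonic()
    while time.monotonic() - start_t <= timeout:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        time.sleep(0.1)
    return False
```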

Example (Joint Motion + Wait for Completion):

```python
import time

def wait_motion_done(robot, timeout: float = 3.0, poll_interval: float = 0.1) -> bool:  # Shorter timeout (2-3s)
    time.sleep(0.5)
    start_t = time.monotonic()
    while True:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        if time.monotonic() - start_t > timeout:
            return False
        time.sleep(poll_interval)

robot.set_motion_mode(robot.MOTION_MODE.J)
robot.move_j([0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
wait_motion_done(robot, timeout=3.0)  # Shorter timeout
```

## 4. Read Status
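The only status field documented in this reference is `get_arm_status().msg.motion_status` (0 means motion finished); a minimal sketch of reading it is below. `is_motion_done` is an illustrative helper name, not a pyAgxArm API, and other status fields may differ per SDK version.

```python
def is_motion_done(robot) -> bool:
    """Return True when the arm reports motion finished
    (motion_status == 0, per the Implementation Details above)."""
    status = robot.get_arm_status()
    return status is not None and getattr(status.msg, "motion_status", None) == 0
```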

## 5. Others

## 6. Minimal Runnable Template (Extend based on this when generating code)

```python
#!/usr/bin/env python3
import time
from pyAgxArm import create_agx_arm_config, AgxArmFactory


def wait_motion_done(robot, timeout: float = 3.0, poll_interval: float = 0.1) -> bool:  # Shorter timeout (2-3s)
    time.sleep(0.5)
    start_t = time.monotonic()
    while True:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        if time.monotonic() - start_t > timeout:
            return False
        time.sleep(poll_interval)


def main():
    robot_cfg = create_agx_arm_config(
        robot="nero",
        comm="can",
        channel="can0",
        interface="socketcan",
    )
    robot = AgxArmFactory.create_arm(robot_cfg)
    robot.connect()

    # Mode switching requires 1s delay before and after
    time.sleep(1)  # 1s delay before mode switch
    robot.set_normal_mode()
    time.sleep(1)  # 1s delay after mode switch
    
    # CRITICAL: The robot MUST BE ENABLED before switching modes
    while not robot.enable():
        time.sleep(0.01)
    robot.set_speed_percent(80)

    # After each mechanical arm operation, add a small sleep (0.01 seconds)
    # CRITICAL: All movement commands must be used in normal mode
    robot.set_motion_mode(robot.MOTION_MODE.J)
    robot.move_j([0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    time.sleep(0.01)  # Small delay after move command
    wait_motion_done(robot, timeout=3.0)  # Shorter timeout

    # Optional: Disable before exit
    # while not robot.disable():
    #     time.sleep(0.01)


if __name__ == "__main__":
    main()
```

When generating code, replace or add motion steps (move_j / move_p / move_l / move_c, etc.) according to the user's description, and keep the connection, enabling, `wait_motion_done`, and units (radians/meters) consistent with this template.

After the robotic arm's CAN communication and the Python environment are configured, OpenClaw can automatically invoke the SDK driver to generate control code and drive the arm.


2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/controlling-the-nero-robotic-arm-with-openclaw/52825

ROS Discourse General: LinkForge v1.3.0 — The Linter & Bridge for Robotics

Hi everyone! :waving_hand:

LinkForge v1.3.0 was just released! But more than a release announcement, I want to take a moment to share the bigger vision of where this project is going — because it’s grown far beyond a Blender plugin.


:telescope: The Vision

LinkForge is not just a URDF exporter for Blender. The architecture is intentionally built as a Hexagonal Core, fully decoupled from any single 3D host or output format.

The mission is simple:

Bridge the gap between creative 3D design and high-fidelity robotics engineering.

Design Systems (Blender, FreeCAD, Fusion 360) ➜ LinkForge Core ➜ Simulation & Production (ROS 2, MuJoCo, Gazebo, Isaac Sim)

Because in robotics, Physics is Truth. Every inertia tensor, every joint limit, every sensor placement should be mathematically correct before it ever reaches a simulator.


:rocket: What’s New in v1.3.0?

Full release notes on GitHub

:high_voltage: Performance

:control_knobs: ros2_control Intelligence

:bug: Key Bug Fixes

:clipboard: Component Search


:handshake: Looking for Contributors

The upcoming roadmap includes SRDF Support, the linkforge_ros package, and a Composer API for modular robot assemblies.

If you work in ROS 2, enjoy Python or Rust, or have ideas on how to improve the URDF/XACRO workflow, come say hi:


What is your biggest URDF/XACRO pain point today? I’d love to know what the community needs most as we plan the next milestone! :robot::sparkles:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/linkforge-v1-3-0-the-linter-bridge-for-robotics/52822

ROS Discourse General: Tom Ryden talks robotics trends at Boston Robot Hackers on March 5!

We are excited to share news about the next monthly meeting of the Boston Robot Hackers! Check it out if you are in the Boston area, and please register for the event.

Where: Artisans Asylum, 96 Holton Street, Boston, MA 02135
When: Thursday March 5, 7:00-9:00pm
Speaker: Tom Ryden of MassRobotics

If you are into Robotics (and, by definition, you are, given you are reading this!) this promises to be a very interesting talk! Tom will start with an overview of MassRobotics and then get into what the current trends are in the robotics market: what problems are start-ups addressing, how the fundraising market is today, and where the investment dollars are going.

And if you cannot make it, still consider joining our organization!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/tom-ryden-talks-robotics-trends-at-boston-robot-hackers-on-march-5/52810

ROS Discourse General: Intrinsic joining Google as a distinct robotics and AI unit

Hi Everyone,

As you may have seen in the recent blog post, Intrinsic is joining Google as a distinct robotics and AI unit. Specifically, Intrinsic’s platform will provide a new “infrastructural bridge” between Google’s frontier AI research (such as the AI coming from the Gemini and DeepMind teams) and the practical, high-stakes requirements of industrial manufacturing, which is Intrinsic’s focus. This decision allows our team to continue building the Intrinsic platform and to operate in much the same way as before. Our commercial mandate remains the same, as does our focus on delivering intelligent solutions for our customers.

Intrinsic remains dedicated to the commitments we’ve made to the open source community, to ROS, Gazebo and Open-RMF (including Lyrical and Kura release roadmaps) and deepening our platform integrations with ROS over time. We’re also very excited about the AI for Industry Challenge this year, which is organized with the team at Open Robotics and has thousands of registrants so far.

From the community’s perspective we are expecting minimal disruption, if any, and we look forward to showing and sharing more news at ROSCon in Toronto later this year.

4 posts - 4 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/intrinsic-joining-google-as-a-a-distinct-robotics-and-ai-unit/52712

ROS Discourse General: ROS Lyrical Release Working Group

The ROS Lyrical Release Working Group will have its first meeting Fri, Feb 27, 2026 7:00 PM UTC→Fri, Feb 27, 2026 8:00 PM UTC.

Want to come? Give feedback on the time here: Meeting time: ROS Lyrical Release WG

Meeting link: https://openrobotics-org.zoom.us/meetings/81224698184/invitations?signature=SrUjwX951phQQqx25bNfAA-MFEpABNsKY1vAAWfp91s

Notes and Agenda: https://docs.google.com/document/d/1lkilmVulAUF1qVRsmimMa1cJtO2jOLoGy75itOCwc78/edit?usp=sharing

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-lyrical-release-working-group/52691

ROS Discourse General: Meeting Summary for Accelerated Transport Working Group 02/18/2026

In this meeting we discussed

Remember that the meeting happens every week to push this feature into Lyrical Luth. Please check the Open Source Robotics Foundation official events to join the next meeting.

Join the #Accelerated Memory Transport Working Group to discuss more details on Zulip

Meeting notes

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/meeting-summary-for-accelerated-transport-working-group-02-18-2026/52672

ROS Discourse General: Meeting Summary for Accelerated Transport Working Group 02/11/2026

We had the first meeting in the Accelerated Memory Transport WG. The meeting focused on discussing a new prototype presented by Karsten and CY from NVIDIA.

We discussed some topics:

Meeting notes

Please check the Open Source Robotics Foundation official events to join the next meeting

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/meeting-summary-for-accelerated-transport-working-group-02-11-2026/52671

ROS Discourse General: Transitive Core Concepts — #1: Full-stack Packages: robot + cloud + web

Working with nine different robotics companies over the course of 10 years has taught us a thing or two about designing robotic full-stack architectures. All this experience went into the design of Transitive, the open-source framework for full-stack robotics. We’ve started a new mini-series of blog posts where I dive into the three core concepts of the framework. Too often do we see robotic startups fall into the same pitfalls when designing their full-stack architecture (robot + cloud + web). Therefore it is important to us to share our experience and explain why we built Transitive the way we did.

In this first post you’ll learn about the need for cross-device code encapsulation, how we addressed this need in Transitive via full-stack packages, and what benefits result from this approach for growing your fleet and functionality without increasing complexity.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/transitive-core-concepts-1-full-stack-packages-robot-cloud-web/52651

ROS Discourse General: Ros2_canopen shortcomings and improvements

During the integration of our hardware we (inmach.de) encountered some shortcomings in the ros2_canopen package, which we worked around or fixed in our fork of ros2_canopen. We’d like to get these changes into the upstream repo so that everyone can benefit from them.

The major shortcomings we found and think should and could be improved are:

With this post I’d like to start a discussion with the ROS community and the maintainers (@c_h_s, @ipa-vsp) of ros2_canopen about other possible shortcomings and about what can and needs to be done to improve the ros2_canopen stack, so that together we can make it even better in the years to come.

5 posts - 3 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros2-canopen-shortcomings-and-improvements/52647

ROS Discourse General: Predictability of zero-copy message transport

Hey! I’m looking to improve my ROS 2 code performance with the usage of zero-copy transfer for large messages.

I’ve been under the impression that simply composing any composable node into a container and setting “use_intra_process_comms” to True would lead to zero-copy transfer. But after experimenting and going through multiple tutorials, design docs, and discussions, that doesn’t seem to be the case.

I wanted to create this thread to write down some of my questions, in the hopes of them being helpful for improving the documentation, and to get a better understanding of the zero-copy edge cases. I’m also curious to hear if there are already ways to easily verify that the zero-copy transfer is happening.


To my understanding, there are a number of different factors that can influence whether zero-copy happens or not:

I’m looking to understand what are the cases when the zero-copy transfer really happens, and in which cases ROS just quietly falls back to copying the messages.

Many of these questions also boil down to a bigger question: How can I verify that zero-copy happens, and what kind of performance benefit am I getting from it? All the demos I’ve seen so far simply print the memory address of the message to confirm that zero-copy happens. I think it would be highly beneficial to have a better way directly in ROS 2 to see whether zero-copy pub-sub is actually happening. Is there already a way to do that, or do you see how this could be implemented? Maybe through the ros2 topic CLI?

In addition to the above questions, the tutorials and other resources still left me wondering about these ones:

[1] ROS Jazzy Tutorial - Intra-Process-Communication
[2] Discourse Thread - Performance Characteristics: subscription callback signatures, RMW implementation, Intra-process communication (IPC)
[3] ROS 2 Design Article - Intraprocess communications
[4] ROS Jazzy Tutorial - Configure Zero Copy Loaned Messages

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/predictability-of-zero-copy-message-transport/52646

