Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: Can we build micro-ROS on ESP32 with Zephyr RTOS?
Came across a repo: GitHub - micro-ROS/micro_ros_setup: Support macros for building micro-ROS-based firmware.
In the table under "Configuring the micro-ROS module", it's stated that USB and UART support is not yet done for ESP32. So I was wondering if I could build micro-ROS on ESP32 via Zephyr RTOS.
When I was learning Zephyr RTOS I built and flashed MCUs, but maybe the micro-ROS setup does not support it yet?
1 post - 1 participant
ROS Discourse General: Videos from ROSCon UK 2025 in Edinburgh 🇬🇧
Hi Everyone,
The entire program from our inaugural ROSCon UK in Edinburgh is now available ad-free
on the OSRF Vimeo account. You can find the full conference website here.
1 post - 1 participant
ROS Discourse General: DroidCam in ROS2
Hi everyone! I've recently published a ROS 2 package for DroidCam to ease using your Android/iPhone as a webcam in ROS 2.
1 post - 1 participant
ROS Discourse General: ROS2 URDF language reference?
The ROS1 wiki includes a complete reference for the URDF language. The ROS2 documentation contains a series of URDF tutorials but, as far as I can see, no equivalent language reference. Is the ROS1 wiki still the authoritative reference for URDF? If not, where can I find the latest reference?
1 post - 1 participant
ROS Discourse General: Simple composable and lifecycle node creation - Turtle Nest 1.2.0 update
When developing with ROS 2, I often have to create new nodes that are composable or lifecycle nodes. Setting them up from scratch can be surprisingly tedious, which is why I added a feature to Turtle Nest that allows you to create these nodes with a single click.
Even the CMakeLists.txt and other setup files are automatically updated, so you can run the template node immediately after creating it.
Lifecycle and composable nodes are available in Turtle Nest since the newest 1.2.0 release, which is now available for all active ROS 2 distributions via apt installation. Since the last announcement here on Discourse, it's now also possible to create a Custom Message Interfaces package.
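For reference, here is roughly what a minimal rclpy lifecycle node looks like. This is a generic sketch, not Turtle Nest's actual generated template; the node and topic names are made up:

```python
# Generic sketch of a minimal rclpy lifecycle node -- not Turtle Nest's
# actual generated output; node and topic names here are illustrative.
import rclpy
from rclpy.lifecycle import Node, State, TransitionCallbackReturn
from std_msgs.msg import String


class MinimalLifecycleNode(Node):
    def __init__(self):
        super().__init__('minimal_lifecycle_node')
        self._pub = None

    def on_configure(self, state: State) -> TransitionCallbackReturn:
        # Allocate resources here; the publisher stays dormant until activation.
        self._pub = self.create_lifecycle_publisher(String, 'status', 10)
        self.get_logger().info('Configured.')
        return TransitionCallbackReturn.SUCCESS

    def on_activate(self, state: State) -> TransitionCallbackReturn:
        # The base class activates lifecycle publishers for us.
        self.get_logger().info('Activated.')
        return super().on_activate(state)


def main():
    rclpy.init()
    rclpy.spin(MinimalLifecycleNode())


if __name__ == '__main__':
    main()
```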
Hope you find these features as useful as they’ve been for my day-to-day development!
3 posts - 2 participants
ROS Discourse General: Update RMW Zenoh-Pico for Zenoh 1.4.0
At ROSCon JP 2025 on September 9, eSOL demonstrated robot operation using micro-ROS on top of Zenoh-Pico.
Fortunately, @Yadunund gave an excellent presentation on integrating ROS and Zenoh-Pico, and I think many Japanese developers learned about Zenoh-Pico.
Now that the team has gained solid working experience, eSOL would like to announce an update to the software we showed at ROSCon JP 2025.
- RMW Zenoh-Pico: GitHub - esol-community/rmw_zenoh_pico: Zenoh-pico implementation for micro-ROS
- Latest tag: 1.4.0
- micro-ROS Setup: GitHub - esol-community/micro_ros_setup at rmw_zenoh_pico
- Supported branch: rmw_zenoh_pico
- micro-ROS PlatformIO: GitHub - esol-community/micro_ros_platformio at feature/rmw_zenoh_pico
- Supported branch: feature/rmw_zenoh_pico
- Example app: micro_ros_platformio/examples/micro-ros_zenoh_pico_publisher at feature/rmw_zenoh_pico · esol-community/micro_ros_platformio · GitHub
This update is an enhancement over the version previously posted in the following topic.
Major updates include:
- Support for Zenoh and Zenoh-Pico version 1.4.0
- Support for several M5Stack (ESP32) dev kits in PlatformIO environments
- Additional patches for Zenoh-Pico
- micro-ROS-only operation, without a ROS 2 host or zenohd
- Confirmed that the M5Stack can communicate via both unicast and multicast
A video is included at the end.
We haven't measured precisely, but it can send ROS messages over the ESP32's Wi-Fi at roughly 20 ms intervals.
1 post - 1 participant
ROS Discourse General: [Announcement] Safe DDS 3.0 is ISO 26262 ASIL D certified — ROS 2 tutorial + field deployment
Safe DDS 3.0 is now ISO 26262 ASIL D certified (renewal after 2.0). It’s compatible with ROS 2. We’re sharing a hands-on tutorial and pointing to a field-deployed device using Safe DDS.
- ROS 2 tutorial: https://safe-dds.docs.eprosima.com/main/intro/tutorial_ros2.html
- Field deployment (RealSense D555 PoE): https://realsenseai.com/ruggedized-industrial-stereo-depth/d555-poe/?q=%2Fruggedized-industrial-stereo-depth%2Fd555-poe%2F&
Why this might help ROS 2 teams
Many projects need deterministic communications and safety certification evidence on the path to production. Our goal with Safe DDS is to provide a certified DDS option that integrates with existing ROS 2 workflows while supporting real-world operational needs (TSN, redundancy, memory control, etc.).
Certification cadence: Safe DDS has maintained ASIL-D certification across major releases (2.0 → 3.0). For teams planning multi-year products, the ability to renew certification as versions evolve can simplify compliance roadmaps.
What’s new in Safe DDS 3.0 (highlights)
- @optional & @external type support — optional members; external basic types; sequences/arrays of basic types; and strings.
- Custom memory allocators — integrate your own allocators for tighter control.
- Channel redundancy — listen on multiple channels simultaneously for fault tolerance.
- Manual entity decommissioning — finer control over DDS entity lifecycle.
- TSN compatibility for UDPv4 transport — operate the ASIL-D–certified UDPv4 transport within TSN setups.
- Ethernet transport — native IEEE 802.1Q (TSN-compatible).
- Docs & tutorials — expanded resources (ROS 2 integration, RTEMS getting-started, board packages for NXP, STMicroelectronics, Espressif, …).
Using Safe DDS with ROS 2
The tutorial below walks through the integration model and configuration patterns with ROS 2:
Tutorial: https://safe-dds.docs.eprosima.com/main/intro/tutorial_ros2.html
For those evaluating real deployments, here’s a previously released ruggedized depth camera using Safe DDS:
Field deployment (RealSense D555 PoE): https://realsenseai.com/ruggedized-industrial-stereo-depth/d555-poe/?q=%2Fruggedized-industrial-stereo-depth%2Fd555-poe%2F&
Open to questions
Happy to discuss ROS 2 integration details (QoS, discovery, transports), TSN/802.1Q topologies, determinism/memory considerations, and migration paths (prototype on Fast DDS → production with Safe DDS).
1 post - 1 participant
ROS Discourse General: Rosbag2 composable record - splitting files
Hi,
I have been using rosbag2 to record topics as a composable node for a while now. Does anyone here know how I could split the recording into several files during the recording process using the max_file_size parameter? Is this even possible with the composable node method?
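Not an authoritative answer, but one avenue to check: in recent rosbag2 releases the recorder is also registered as a rosbag2_transport::Recorder component whose behavior is driven by node parameters, so a launch file along these lines may be worth trying. Everything marked as assumed below should be verified against your distro:

```python
# Hedged sketch: loading the rosbag2 recorder as a composable node with file
# splitting.  The plugin name and the 'storage.max_bagfile_size' /
# 'record.all' parameter keys are assumptions based on recent
# rosbag2_transport releases -- check `ros2 component types rosbag2_transport`
# and your distro's rosbag2 docs before relying on them.
# (The CLI equivalent is `ros2 bag record --max-bag-size <bytes>`.)
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    recorder = ComposableNode(
        package='rosbag2_transport',
        plugin='rosbag2_transport::Recorder',  # assumed component name
        name='rosbag2_recorder',
        parameters=[{
            'storage.uri': '/tmp/my_bag',
            'storage.max_bagfile_size': 100000000,  # split at ~100 MB (bytes)
            'record.all': True,                     # assumed parameter key
        }],
    )
    container = ComposableNodeContainer(
        name='recorder_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[recorder],
    )
    return LaunchDescription([container])
```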
3 posts - 2 participants
ROS Discourse General: What’s the #1 bottleneck in your robotics dev workflow? (help us prioritize SnappyTool)
Hi everyone,
I’ve been consulting in robotics on and off, and one pattern keeps coming up: our development tools are still too painful.
- Setting up projects can take days, often requiring advanced expertise just to get an environment working.
- Teams often say, “Framework X didn’t work for us, so we built our own library.” That may solve a narrow problem, but it slows down progress for the field as a whole.
We think there must be a better way.
That’s why we’re building SnappyTool, a browser-based drag-and-drop robotics design platform where you can:
- Assemble robots visually
- Auto-generate URDF / ROS code
- Share designs and even buy/sell robot parts via a marketplace
- Use it freely with a generous freemium model (not gatekeeping innovation!)
The ask:
What’s the #1 bottleneck in your robotics workflow that, if solved, would significantly improve your productivity (enough that you or your team would pay for it)?
Examples could be:
- Simulation setup
- CAD → URDF conversion
- Version control for robot models
- Sourcing compatible hardware parts
- Deployment and integration
We have a little runway and have assembled a small team to work full-time on this. We’d like to make sure we are solving real pains first, not imaginary ones.
Any input would be very much appreciated, thank you!
1 post - 1 participant
ROS Industrial: New Tools for Robotics: RQT Frame Editor and the pitasc Framework
As robotics continues to expand into industrial and collaborative environments, researchers and developers are working on tools that make robots easier to configure, teach, and reconfigure for real-world tasks. In a recent talk, Daniel Bargmann (Fraunhofer IPA) introduced two powerful software solutions designed for exactly this purpose: the RQT Frame Editor and the pitasc Framework.
RQT Frame Editor – Simplifying TF-Frame Management
The RQT Frame Editor is a ROS plugin that makes working with TF-frames more intuitive. Instead of editing configuration files manually, users can visually create, arrange, and adjust frames within the familiar RQT and RViz environments.
Key features include:
Interactive frame manipulation – Move, rotate, or manually set values for frames.
Copy and reuse poses – Copy positions or orientations from existing frames.
Mesh visualization – Attach meshes (including custom STL files) to frames and view them in RViz.
Frame grouping and pinning – Organize frames by groups or “pin” active frames for efficient workflow.
ROS service integration – Use frame editor functionality programmatically in your own applications.
These capabilities are especially valuable for developers working on multi-robot setups, simulation environments, or applications that require frequent TF-frame adjustments.
Documentation and source code are available on GitHub.
pitasc – A Skill-Based Framework for Force-Controlled Robotics
The second tool highlighted in the presentation is pitasc, a robot control framework designed for force-controlled assembly and disassembly tasks. Unlike traditional, vendor-specific robot programming approaches, pitasc uses a skill-based programming model.
In practice, this means developers do not write low-level motion code directly. Instead, they arrange and parameterize skills—reusable building blocks that range from simple movements (e.g., LIN or PTP) to advanced behaviors that combine position and force control across different dimensions.
Real-World Applications
pitasc has already been deployed across a wide variety of industrial use cases, including:
Assembly of plastic components
Riveting, screwing, and clipping tasks
Flexible robot cells with rapid reconfiguration
Dual-arm coordination, such as automated wiring of electrical cabinets
This flexibility allows pitasc to support both collaborative robots and industrial robots, bridging the gap between research and production environments.
Documentation and source code are available here.
pitasc at a glance
Live demo of rqt frame editor and pitasc
Watch the full talk by Daniel Bargmann on YouTube to see live demos of both the RQT Frame Editor and pitasc in action, including real-world examples of assembly and disassembly tasks.
ROS Discourse General: AMP With Carter Schultz | Cloud Robotics WG Meeting 2025-10-08
The CRWG is pleased to welcome Carter Schultz of AMP to our upcoming meeting on Wed, Oct 8, 2025, 4:00–5:00 PM UTC. AMP is working to modernise global recycling infrastructure with AI-driven robotics. Carter will share the company’s vision and, in particular, the key challenges it faces when operating a large fleet of autonomous robots.
Please note that the meeting day has changed for the CRWG. Previous meetings were on Monday; they are now on Wednesday at the same time.
Last meeting, guest speakers Lei Fu and Sahar Slimpour, from the Zurich University of Applied Sciences and the University of Turku respectively, joined the CRWG to talk about their ROSBag MCP Server research (also shared in ROS Discourse). If you’d like to watch the meeting, it is available on YouTube.
The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
1 post - 1 participant
ROS Discourse General: 【PIKA】Method for Teleoperating Any Robotic Arm via Pika
Hi everyone,
I’d like to share a universal method for teleoperating robotic arms using Pika Sense. This approach works with any ROS-enabled robotic arm (we’ve tested it with Piper, xArm, and UR robots) by leveraging high-precision 6D pose tracking (0.3 mm accuracy) and incremental control algorithms. The system publishes standard geometry_msgs/PoseStamped messages on the pika/pose topic, making integration straightforward. Hope this helps anyone looking to implement teleoperation across different robot platforms!
Teleoperation
Teleoperation of robotic arms is achieved using Pika Sense. When used with external positioning base stations, Pika Sense can acquire 6D pose data with an accuracy of 0.3mm. After aligning the coordinate system of Pika Sense with that of the robotic arm’s end effector, incremental control is employed to map the 6D pose data to the end effector of the robotic arm, thereby realizing teleoperation.
In summary, the teleoperation principle consists of four key steps:
- Acquire 6D pose data
- Align coordinate systems
- Implement incremental control
- Map 6D pose data to the robotic arm
Below is a detailed breakdown and explanation of each step.
Acquiring 6D Pose Data
Positioning Principle of Pika Sense and Station
1. Positioning Mechanism of Base Stations
- Each base station is equipped with an infrared LED array and two rotating laser transmitters (responsible for horizontal and vertical scanning, respectively):
- The infrared LEDs flash globally at a frequency of 60Hz, providing synchronization signals for the entire space.
- The laser transmitters, driven by motors, rotate and alternately emit horizontal and vertical laser beams; each sweep takes 10 ms, so a complete horizontal + vertical cycle takes 20 ms.
- A single base station can achieve a laser scanning coverage of 5×5 meters; with four base stations working collaboratively, the coverage can be expanded to 10×10 meters.
2. Positioning Implementation of Pika Sense
- The upper sensor of Pika Sense is called the Tracker, which is densely equipped with more than 70 photosensors on its surface. Each sensor can receive infrared signals and laser scans.
- Positioning calculation process:
- Each sensor records the laser’s time of arrival; combined with the base station’s scanning cycle, this yields the sensor’s horizontal and pitch angles relative to the base station (a toy numeric sketch follows this list).
- Through the spatial distribution and time difference data of multiple sensors (≥5), the precise position and orientation of the Tracker are solved.
- Calculations are completed directly by the local processor without the need for image processing, resulting in a delay of only 20ms and a positioning accuracy of 0.3mm.
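To make the timing-to-angle step concrete, here is a toy numeric sketch. The 60 Hz sync and 10 ms sweep constants come from the description above; the linear 180° mapping and everything else is a simplification, not Pika’s actual solver:

```python
SYNC_PERIOD_S = 1.0 / 60.0  # infrared sync flash period (60 Hz), from above
SWEEP_PERIOD_S = 0.010      # one laser sweep takes 10 ms, from above


def sweep_angle_deg(t_sync_s: float, t_hit_s: float) -> float:
    """Map a laser time-of-arrival onto a sweep angle.

    The rotor sweeps the field of view at a constant rate, so the time
    elapsed between the sync flash and the laser hitting a photosensor is
    proportional to the sensor's angle for that sweep.  The 180-degree span
    used here is a simplification of the real optics.
    """
    return 180.0 * (t_hit_s - t_sync_s) / SWEEP_PERIOD_S


# A sensor hit 2.5 ms after the sync flash sits at about 45 degrees.
print(sweep_angle_deg(0.0, 0.0025))  # -> 45.0
```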
The 6D pose data is published as messages of the geometry_msgs/PoseStamped type to the pika/pose topic, which is compatible with end-pose control of most robotic arms on the market.
In addition to the ROS message type, if you need to access 6D pose data independently of ROS, please refer to our pika_sdk.
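For consumers of this stream, a minimal rclpy subscriber is all that is needed. The pika/pose topic name comes from the post; the node name and callback body below are illustrative:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class PikaPoseListener(Node):
    """Minimal consumer of the 6D pose stream published by Pika Sense."""

    def __init__(self):
        super().__init__('pika_pose_listener')
        self.create_subscription(PoseStamped, 'pika/pose', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped) -> None:
        p = msg.pose.position
        self.get_logger().info(f'pose: x={p.x:.4f} y={p.y:.4f} z={p.z:.4f}')


def main():
    rclpy.init()
    rclpy.spin(PikaPoseListener())


if __name__ == '__main__':
    main()
```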
Coordinate System Alignment
In the first step [Acquiring 6D Pose Data], whether the 6D pose data is obtained by subscribing to the ROS topic or via the Pika SDK, the coordinate system of Pika Sense is centered at the gripper, with the x-axis facing forward, the y-axis facing left, and the z-axis facing upward, as shown in the figure below:
Different robotic arms have different coordinate systems for their end effectors. However, for most of them, the z-axis faces forward, while the orientations of the x-axis and y-axis depend on the initial rotation values of the robotic arm’s end effector. The method for checking the coordinate system of a robotic arm’s end effector varies by model; typically, it can be viewed through the host software provided by the manufacturer or by loading the robotic arm model in ROS RViz.
After understanding the coordinate systems of both Pika Sense and the robotic arm’s end effector, the 6D pose data of Pika Sense is converted into a homogeneous transformation matrix. This matrix is then multiplied by an adjustment matrix to align the Pika Sense coordinate system with the robotic arm’s end effector coordinate system. This completes the coordinate system alignment process.
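Concretely, the alignment amounts to a single extra matrix product. Here is a small sketch assuming scipy is available; the 90° adjustment rotation is only an example, since the real adjustment matrix depends on your arm’s end-effector convention:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def xyzrpy_to_mat(x, y, z, roll, pitch, yaw) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a pose in xyzrpy form."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T


# Pika convention: x forward, y left, z up.  Example target convention:
# z forward.  Rotating -90 deg about y maps Pika's x-axis onto z.  This
# particular adjustment is illustrative -- derive yours from the arm docs.
ADJUST = np.eye(4)
ADJUST[:3, :3] = Rotation.from_euler('y', -90, degrees=True).as_matrix()

pika_pose = xyzrpy_to_mat(0.1, 0.0, 0.2, 0.0, 0.0, 0.0)
aligned = pika_pose @ ADJUST  # pose expressed in the end-effector convention
print(aligned.round(3))
```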
Incremental Control
In the second step [Coordinate System Alignment], we align the coordinate system of Pika Sense with that of the robotic arm’s end effector (with the z-axis facing forward). However, a question arises: when holding Pika Sense and moving it forward, will the value of its z-axis necessarily increase positively?
Not necessarily. The pose value is relative to its base_link. If the z-axis of base_link happens to coincide with the z-axis of Pika Sense, then Pika Sense’s z value will indeed increase. However, the base_link of Pika Sense is a coordinate system generated when Pika Sense is calibrated with the base station, with the x-axis facing forward, the y-axis facing left, and the z-axis facing upward. In other words, base_link is effectively arbitrary with respect to the robotic arm.
So, how do we map the coordinates of Pika Sense to the robotic arm’s end effector? How can we ensure that when Pika Sense moves forward/left, the robotic arm’s end effector also moves forward/left accordingly?
The answer is: use incremental control.
In teleoperation, the pose provided by Pika Sense is an absolute pose. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the relative movement of the operator, starting from its current position. Simply put, it involves converting the absolute pose change of the operating device (Pika Sense) into a relative pose command that the robotic arm needs to execute.
The core code for this functionality is as follows:
# Incremental control
def calc_pose_incre(self, base_pose, pose_data):
    # Pose of the operating device (Pika Sense) when teleoperation started: T_begin.
    begin_matrix = tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                    base_pose[3], base_pose[4], base_pose[5])
    # Pose of the robotic arm's end effector when teleoperation started: T_zero.
    zero_matrix = tools.xyzrpy2Mat(self.initial_pose_rpy[0], self.initial_pose_rpy[1], self.initial_pose_rpy[2],
                                   self.initial_pose_rpy[3], self.initial_pose_rpy[4], self.initial_pose_rpy[5])
    # Current pose of the operating device: T_end.
    end_matrix = tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                  pose_data[3], pose_data[4], pose_data[5])
    # T_zero @ inv(T_begin) @ T_end: apply the operator's relative motion
    # (the increment) to the arm's starting pose.
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = tools.mat2xyzrpy(result_matrix)
    return xyzrpy
This function implements incremental control using the arithmetic rules of transformation matrices. Let’s break down the code step by step:
Input Parameters
- base_pose: the reference pose at the start of teleoperation. When teleoperation is triggered, the system records the pose of the operating device (Pika Sense) at that moment and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero” for calculating all subsequent increments.
- pose_data: the real-time pose data of the operating device (Pika Sense) received at the current moment.
Matrix Conversion
The function first converts three key poses (expressed in the format [x, y, z, roll, pitch, yaw]) into 4×4 homogeneous transformation matrices. This conversion is performed by the tools.xyzrpy2Mat function.
- begin_matrix: converted from base_pose, it represents the pose matrix of the operating device at the start of teleoperation. We denote it as T_{begin}.
- zero_matrix: converted from self.initial_pose_rpy, it represents the pose matrix of the robotic arm’s end effector at the start of teleoperation. This is the “starting point” for the robotic arm’s movement. We denote it as T_{zero}.
- end_matrix: converted from pose_data, it represents the pose matrix of the operating device at the current moment. We denote it as T_{end}.
Core Calculation
This is the most critical line of code:
result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
We analyze it using matrix multiplication:
The formula can be expressed as: Result = T_{zero} \times (T_{begin})^{-1} \times T_{end}
- np.linalg.inv(begin_matrix): calculates the inverse of begin_matrix, i.e., (T_{begin})^{-1}. In robotics, the inverse of a transformation matrix represents the reverse transformation.
- np.dot(np.linalg.inv(begin_matrix), end_matrix): calculates (T_{begin})^{-1} \times T_{end}. Physically, this is the transformation required to go from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the operating device from the start of teleoperation to the current moment. We refer to this increment as \Delta T.
- np.dot(zero_matrix, ...): calculates T_{zero} \times \Delta T, i.e., it applies the relative pose change (\Delta T) just computed to the initial pose (T_{zero}) of the robotic arm’s end effector.
Result Conversion and Return
- xyzrpy = tools.mat2xyzrpy(result_matrix): converts the calculated 4×4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robotic arm controller can understand.
- return xyzrpy: returns the calculated target pose.
A self-contained numeric check of the same formula follows.
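To verify the increment behaves as described, here is a standalone version of the same formula with a numeric check (the helper names are local to this sketch, not the post’s tools module):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def xyzrpy_to_mat(p):
    """[x, y, z, roll, pitch, yaw] -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', p[3:]).as_matrix()
    T[:3, 3] = p[:3]
    return T


def mat_to_xyzrpy(T):
    """4x4 homogeneous transform -> [x, y, z, roll, pitch, yaw]."""
    return list(T[:3, 3]) + list(Rotation.from_matrix(T[:3, :3]).as_euler('xyz'))


def calc_pose_incre(initial_arm_pose, base_pose, pose_data):
    """T_zero @ inv(T_begin) @ T_end, as in the method above."""
    T_zero = xyzrpy_to_mat(initial_arm_pose)
    T_begin = xyzrpy_to_mat(base_pose)
    T_end = xyzrpy_to_mat(pose_data)
    return mat_to_xyzrpy(T_zero @ np.linalg.inv(T_begin) @ T_end)


# The hand moved 5 cm forward since teleop was triggered, so the arm's target
# is 5 cm in front of the arm's own starting pose -- the origins don't matter.
arm_start = [0.3, 0.0, 0.4, 0.0, 0.0, 0.0]   # T_zero
hand_start = [1.2, 0.5, 0.9, 0.0, 0.0, 0.0]  # T_begin
hand_now = [1.25, 0.5, 0.9, 0.0, 0.0, 0.0]   # T_end
print(calc_pose_incre(arm_start, hand_start, hand_now))
# ~ [0.35, 0.0, 0.4, 0.0, 0.0, 0.0]
```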
Mapping 6D Pose Data to the Robotic Arm
Through incremental control, we obtain the relative pose commands that the robotic arm needs to execute. However, the control commands vary among different robotic arms. This requires writing different control interfaces for each type of robotic arm. For example:
- Robotic arms such as Piper and xArm can directly accept commands in the form of xyzrpy or xyz + quaternion. The only difference is that Piper uses a ROS topic for message publishing, while xArm uses a ROS service for request sending.
- UR robotic arms use the xyz + rotation vector format for command delivery.
In summary, to send the 6D pose data calculated via incremental control to the robotic arm, the final step is to adapt it to the robotic arm’s control interface.
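As an example of that last adaptation step, a minimal adapter for an arm that accepts xyz + quaternion over a topic might look like this. The topic name, frame, and message choice are assumptions; Piper, xArm, and UR each need their own variant:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from scipy.spatial.transform import Rotation


class XyzRpyToPoseAdapter(Node):
    """Publish an xyzrpy target as PoseStamped on an arm's pose topic."""

    def __init__(self):
        super().__init__('arm_pose_adapter')
        # '/arm/target_pose' is a placeholder; use your arm driver's topic.
        self._pub = self.create_publisher(PoseStamped, '/arm/target_pose', 10)

    def send(self, xyzrpy):
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'base_link'
        msg.pose.position.x = float(xyzrpy[0])
        msg.pose.position.y = float(xyzrpy[1])
        msg.pose.position.z = float(xyzrpy[2])
        # Convert roll/pitch/yaw to the quaternion PoseStamped expects.
        qx, qy, qz, qw = Rotation.from_euler('xyz', xyzrpy[3:]).as_quat()
        msg.pose.orientation.x = float(qx)
        msg.pose.orientation.y = float(qy)
        msg.pose.orientation.z = float(qz)
        msg.pose.orientation.w = float(qw)
        self._pub.publish(msg)
```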
Summary
This article elaborates on the core technical principles of realizing robotic arm teleoperation based on Pika Sense. The entire process can be summarized into four key steps:
1. Acquire 6D pose data: First, a system composed of Pika Sense and external positioning base stations is used to accurately capture the operator’s hand movements. The base stations scan the space using infrared synchronization signals and rotating lasers. The photosensors on Pika Sense receive these signals, solve its high-precision six-degree-of-freedom (6D) pose (position and orientation) in real time, and publish this data via ROS topics or the SDK.
2. Align coordinate systems: Since the coordinate system definitions of Pika Sense and the end effectors of different robotic arms are inconsistent, alignment is essential. Given the respective coordinate system definitions of Pika Sense and the target robotic arm, a transformation matrix is calculated to convert the pose data of Pika Sense into the coordinate system matching the robotic arm’s end effector, ensuring that subsequent control is intuitive.
3. Implement incremental control: To enable the robotic arm to smoothly follow the operator’s relative movement (rather than jumping abruptly to an absolute position), an incremental control strategy is adopted. This method takes the hand pose and robotic arm pose at the start of teleoperation as references, uses matrix operations to calculate in real time the relative pose change (increment) of the hand from the “starting point” to the “current point”, and applies this increment to the initial pose of the robotic arm to obtain the arm’s current target pose.
4. Map to the robotic arm: The final step is to send the calculated target pose commands to the robotic arm for execution. Since robotic arms of different brands and models (e.g., Piper, xArm, UR) have distinct control interfaces and communication protocols (e.g., ROS topic, ROS service, specific command formats), corresponding adapter code needs to be written to format the standard 6D pose data into commands that the specific robotic arm can recognize and execute, ultimately achieving precise teleoperation control.
That’s it—four steps to teleoperate any robotic arm with Pika! The magic is in the incremental control: your hand moves 5cm forward, the robot moves 5cm forward. Simple math, smooth motion. We’ve tested this on Piper, xArm, and UR arms, and the same approach should work for your robot too. Questions? Want to share your teleoperation adventures? Drop a comment below!
Cheers!
1 post - 1 participant
ROS Discourse General: ROS2 "State of the Events Executors" - Benchmark comparison between rclcpp::experimental::EventsExecutor and cm_executors::EventsCBGExecutor
As part of the upcoming ROS2 Lyrical Luth release, the client library working group has been planning to mainstream an EventsExecutor implementation as the new default executor in rclcpp. The current experimental implementation is limited by its inability to properly handle simulation time, unlike the EventsCBGExecutor implemented by @JM_ROS over at Cellumation, which can properly handle sim time as well as offering a multithreaded mode. As a first step towards mainstreaming an EventsExecutor implementation, we ran an extensive set of benchmarks built on top of iRobot’s ros2-performance framework (Keep an eye out, as we are hoping to eventually open-source the full benchmark test suite!)
This post will serve as a deep dive into the performance characteristics of the two executors as well as a jumping off point for discussing the overall state of executors (and middleware implementations) in ROS2. (This is a cleaned-up rewrite of a github gist that I originally put all the benchmark info into)
Some notes about the benchmarks:
- Benchmark environment:
  - upstream ROS rolling docker container running on an x86 developer laptop under minimal load
  - rclcpp rolling: b14af74a4c9b8683e72b15d61d0ed9121d883973
  - cm_executors: 783a5e329ee8b04abfa3b3397532e979576a2b1f
  - ros2_performance: 4528f43410922379b8da501630d9d938046e48e8
- This suite of benchmarks was run at least 3 times per implementation, to ensure consistent results. For brevity’s sake, we’ll stick to one graph each for this analysis, but the full set of results will be made available elsewhere.
- ipc_on = running with intra-process mode
- In the latency tests, max latency signifies the highest single latency measurement taken for that message size. We didn’t do any outlier filtering on this dataset (aside from the high latencies in the first few seconds), so this value is known to vary more between runs.
- In the process of producing these benchmarks, we discovered a bug in the generation of the clients/services single- and multi-process CPU usage graphs. Graphs were generated for each, but the underlying data represents just the single-process benchmark, so we’ll only cover single-process clients/services CPU usage.
- There were a few tests we couldn’t run with the EventsCBGExecutor because of freezes or crashes, and so those tests were also omitted for the upstream EventsExecutor.
- For a more 1:1 comparison between the two executors, the EventsCBGExecutor was fixed to use just one thread.
Takeaways, tl;dr:
- Despite some initial concerns about marginally higher CPU usage for the EventsCBGExecutor compared to the experimental EventsExecutor, there doesn’t appear to be much of a difference across the characteristics we tested, with the following exceptions:
  - EventsExecutor performed slightly better on the long-running pub/sub CPU usage test.
  - EventsCBGExecutor performed slightly better on the long-running actions CPU usage test.
- Both executors demonstrate memory leaks in the longer-running pub/sub and actions tests. After further investigation, the SingleThreadedExecutor and MultiThreadedExecutor also show a climb in memory for pub/sub, while actions remain stable for the SingleThreadedExecutor (except for rmw_zenoh).
- As we step through the benchmarks, I’ll point out any differences between the executors as they appear.
CPU Usage - Pub/Sub - Single Process
We can see that the y-axis maximum for the second graph is much higher because CycloneDDS seemingly causes the test to consume far more CPU at higher message payloads, amidst otherwise highly comparable results. This difference in CPU for CycloneDDS specifically was consistent across all runs of the benchmark suite.
CPU Usage - Pub Sub - Multi Process
Interestingly, in multi-process mode the climb to ~4-5% of a core at larger payload sizes is now consistent for both executors when running CycloneDDS. Otherwise, both executors seem to put up similar results here.
CPU Usage - Services / Clients - Single Process
CPU Usage - Pub/Sub - Long Running Test (10m)
The usage pattern for both executors appears fairly similar, with the EventsExecutor averaging around 0.05 - 0.1% less CPU usage than EventsCBGExecutor in most runs.
CPU Usage - Services / Clients - Long Running Test (10m)
CPU Usage - Actions - Long Running Test (10m)
We again see a similar usage pattern between the two executors, but with the EventsCBGExecutor consistently maxing out at ~2% less CPU than the EventsExecutor and with a smoother looking graph.
Publisher Latency - Single Process
Subscriber Latency - Single Process
Huge differences in max latency aside, we see comparable results here between the two executor implementations for both pub and sub latency. The mean comparison demonstrates extremely similar results, including CycloneDDS’s extreme latency increases at higher payload sizes. The latency increases appear to exaggerate with slightly smaller payloads in EventsCBGExecutor than in EventsExecutor.
Publisher Latency - Multi Process
Subscriber Latency - Multi Process
Publisher Latency - Long Test (10m)
Subscriber Latency - Long Test (10m)
Memory Scaling Comparison
RAM Usage - Pub/Sub - Long Test (10m)
[2×2 grid of RAM usage graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]
Not much difference between the two events executors. This appears to expose a slow-climbing memory leak on the client library side, either in both of these executor implementations or in some other part of the code. The leak appears consistent across all RMWs and across all runs of all four executors (single threaded, multi threaded, EventsExecutor, EventsCBGExecutor). Zenoh without intra-process shows a much sharper increase in the first few minutes.
RAM Usage - Services/Clients - Long Test (10m)
[2×2 grid of RAM usage graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]
Not much difference across the executors, with the multi-threaded executor exhibiting much higher overall RAM usage baselines. We again see RAM climbing for all four, but the rate of growth appears to level out about five minutes into the tests.
RAM Usage - Actions - Long Test (10m)
[2×2 grid of RAM usage graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]
Both EventsExecutor implementations demonstrate significant memory leaks during the long running actions tests. The multi-threaded executor’s usage pattern looks similar to clients / services. In the SingleThreadedExecutor, rmw_zenoh appears to exhibit leaks unlike the other tested RMWs.
7 posts - 4 participants
ROS Discourse General: ⏳ Regular Priced ROSCon Registration Extended until October 5th!
Hi Everyone,
Great news regarding ROSCon 2025 in Singapore! We’ve extended regular-price ticket sales. The new deadline for purchasing tickets at the regular price is now Sunday, October 5th (Mon, Oct 6, 2025, 6:59 AM UTC). This extension was made to accommodate our colleagues in Asia, especially those in India, as Singapore’s visa application window for India only opens one month prior to travel. We still recommend registering as soon as possible, as our fantastic ROSCon workshops are starting to sell out and about half of them have fewer than ten tickets remaining (see the list below).
ROSCon Workshop Status
- Ros2_control: Fun with Robot Drivers – less than ten seats left
- Scalable Multi-Robot Scene workflows using ROS Simulation Interfaces standard in Isaac Sim – SOLD OUT
- Hands-On Aerial Robotics Using PX4 and ROS 2 – many seats available
- ROS 2 Networking Redefined: Deep Dive into RMW Zenoh – less than ten seats left
- ROS 2 & micro-ROS Dive In: Low-Cost Underwater Robotics – less than ten seats left
- How to Implement a Full ROS 2 Application: a Tic-Tac-Toe Player Robot – many seats available
- Introducing AI PCs for Embodied AI – many seats available
- Reinforcement Learning for Deliberation in ROS 2 – less than twenty seats left
- Introduction to ROS and Building Robots with Open-Source Software – many seats available
2 posts - 1 participant
ROS Discourse General: SDF to URDF conversion in 2025
Hi all,
it seems URDF, SDF, and the conversion between them is a topic that keeps on giving. When I created FusionSDF as a weekend project last year, I didn’t expect to still need URDFs, since SDFs can now be used for robot_description. Turns out I was too optimistic.
In either case, instead of porting 10+ years old ROS 1 code to ROS 2, I decided to leverage the more recent sdformat_urdf to convert SDF to URDF. Thanks to @sloretz, @quarkytale, @ahcorde and others for sdformat_urdf! My tool has the creative name sdf_to_urdf.
It consists of fewer than 50 lines of code, nearly all of it boilerplate. However, I didn’t find an existing ROS 2 tool. It would be great to add the functionality directly to sdformat_urdf, though. Hence, here we are: sdf_to_urdf
Best,
Andreas
3 posts - 2 participants
ROS Discourse General: Update to vscode_ros2_workspace
I’ve updated athackst/vscode_ros2_workspace so you can use the main branch as a single template across ROS distributions.
What changed
- The default branch now supports any ROS version—just set the desired base image in .devcontainer/Dockerfile’s FROM line. The template currently defaults to osrf/ros:jazzy-desktop-full.
- The repo includes guidance for GUI enablement (X11/Wayland, NVIDIA/WSL2 notes) and non-root user development (UID/GID hints). After building the devcontainer, you’ll see the ros user and can adjust the UID/GID if needed.
Quick start
- Click “Use this template” on the repo and create your workspace. The README notes that the default branch works for any ROS distribution by changing the FROM line in .devcontainer/Dockerfile.
- (Optional) Switch ROS versions by setting, e.g.:
  # .devcontainer/Dockerfile
  FROM osrf/ros:humble-desktop-full
- Open in VS Code – it will build the dev container for you; your terminal user will be ros. If you hit X11/Wayland auth or display issues, the README documents fixes (DISPLAY and WAYLAND variables, volumes, NVIDIA/WSL2 notes).
Extras included
- Preconfigured linters/formatters, tasks, and launch configs.
- CI workflow you can tailor to your project.
Why this helps
- One template for all supported ROS 2 distros; simpler upgrades and onboarding.
- Built-in GUI and non-root guidance improves the day-to-day dev experience out of the box.
Feedback welcome
If you try this out—especially on different distros or GPU/WSL2 setups—please share what works and what doesn’t.
1 post - 1 participant
ROS Discourse General: FULL TUTORIAL: Isaac SIM -> isaac_ros_foundationpose -> ManyMove
Hi everyone!
I just published a full isaac_ros_foundationpose pipeline tutorial with a custom fine-tuned YOLOv8 model.
Here you can find the YouTube video!
KEY FEATURES
The pipeline includes:
- NVIDIA Isaac SIM 5.0 to generate synthetic data for fine-tuning the YoloV8s model and for a digital twin of the scene to publish a virtual RealSense camera stream and robot data to ROS2
- Ultralytics to fine-tune the YoloV8s model
- isaac_ros_foundationpose to estimate the 6D pose of the object, using custom fine-tuned YoloV8s for object detection
- ManyMove to handle the ROS2 logic and motion planning leveraging MoveIt2 and BehaviorTree.CPP
Hardware:
- NVIDIA Jetson Orin Agx Developer Kit for isaac_ros_foundationpose and ManyMove on ROS2 Humble
- Laptop with RTX card for Isaac SIM
HIGHLIGHTS
- All assets provided to complete the pipeline, from .usd files for SDG and the scene to .obj and mesh files for FoundationPose
- Example executable with ManyMove behavior tree nodes that rectify the FoundationPose output to allow grasping of symmetric objects and limit pose validity to a specific bounding box: these features stabilize the output and enhance reliability in bin-picking applications.
- Full Isaac Sim scene with Ufactory Lite6 cobot and gripper, customized to provide a realistic pneumatic gripper simulation while keeping coherent ROS2 interaction and MoveIt2 path planning.
LINKS
- ManyMove: GitHub - pastoriomarco/manymove: To plan and execute moves with manipulators in ROS2
- Isaac SIM: What Is Isaac Sim? — Isaac Sim Documentation
- isaac_ros_foundationpose: isaac_ros_foundationpose — isaac_ros_docs documentation
- Ultralytics YOLOv8: GitHub - ultralytics/ultralytics: Ultralytics YOLO 🚀
1 post - 1 participant
ROS Discourse General: Millie_bot is an open source robot with a big DREAM
Hey Open Robots Community!
I want to introduce Millie_bot and the Dream Cloud ecosystem I am building on Web3 under $DREAM / SOL.
Millie_bot is a 3D-printed, modular AI robot built entirely with ROS 2 (on Ubuntu Jammy). The retail/commercial price will be $10–20K, with CAD files available online for remote building. Navigation runs on a Pi, and a Flutter app runs the LLM, voice, and face of the robot.
I want to use the mobile robot to build innovative business strategies that leverage automation to work for local communities. Concepts include a robot drive-in restaurant called DREAM DINER, a fully automated general store called DREAM STORE, and a larger grocery store called DREAM MARKET. The revenue generated by these businesses will then go to build affordable housing and fund UBI.
This is more than a robot project, but I am building everything myself. I am also live streaming everything on X.com/@nico_andretti so you can come and see for yourself. I already have communities that are invested in $DREAM COIN and want to see this project succeed.
If you want to join a project with a vision for supporting communities as automation replaces workers, this is it!
1 post - 1 participant
ROS Discourse General: PSA: Debian Bookworm Boost rosdep entries
This is to serve as a heads up to all Debian Bookworm users who rely on libboost-* rosdep entries. If you do not use Debian Bookworm, or don’t use libboost-* on Debian Bookworm, you can stop reading now.
The attached pull request adds entries that were missing from the libboost-* family of rosdep keys. As a side effect, it aligns all of the libboost versions to 1.74.0, which is the “default” on Debian Bookworm.
The following 4 packages will be “downgraded” from 1.81.0 to 1.74.0:
- libboost-date-time
- libboost-python
- libboost-random
- libboost-thread
Since Bookworm is currently a tier 3 platform (we aren’t providing binary packages for it) and very few core packages currently depend on libboost, the PMC has determined that this is relatively low risk and has opted to proceed.
Let us know if you have any comments/concerns.
1 post - 1 participant
ROS Discourse General: ROS Meetup Bogotá Colombia - 7 Nov 2025
The second edition of ROS Meetup Bogotá is here!
The ROS community in Colombia gathers once again to share knowledge, connect academia and industry, and continue building the future of robotics in our country.
When and where?
Friday, November 7th, 2025
2:00 PM – 7:00 PM
Biblioteca Virgilio Barco, Bogotá
On-site event with live streaming (link will be shared soon)
What to expect?
- Technical talks on ROS and ROS 2 with national and international experts.
- Poster and project fair, showcasing local innovations and research.
- Networking between academia, industry, and robotics enthusiasts.
- Coffee break and informal discussions.
- Themed souvenirs and surprises.
Register as an attendee or apply as a speaker here:
Linktree – RAS Javeriana IEEE
ROS Meetup Bogotá is a space to strengthen the community, foster collaborations, and accelerate the development of innovative robotics projects with ROS/ROS 2.
Organized by:
- IEEE RAS Javeriana St. Ch.
- IEEE RAS Colombia
- IEEE RAS Universidad de los Andes St. Ch.
- IEEE RAS Universidad Escuela Tecnológica Instituto Técnico Central St. Ch.
- IEEE RAS Universidad Distrital Francisco José de Caldas St. Ch.
- Research group SinfonIA – Universidad de los Andes
We look forward to building the future of robotics in Colombia together!
1 post - 1 participant
ROS Discourse General: ARIAC 2025 Registration Open - Industrial Robotics Competition Using ROS/Gazebo
Hi ROS Community,
The National Institute of Standards and Technology (NIST) has opened registration for the Agile Robotics for Industrial Automation Competition (ARIAC) 2025. This is an excellent opportunity for ROS developers to apply their skills to realistic industrial automation challenges.
What is ARIAC?
ARIAC is an annual simulation-based competition that tests robotic systems in dynamic manufacturing environments. The competition presents real-world scenarios where things go wrong - equipment malfunctions, part quality issues, and changing production priorities.
2025 Competition Scenario: EV Battery Production
The competition simulates an EV battery production factory.
Production Workflow:
- Task 1: Inspection and Kit Building - Use LIDAR sensors to inspect battery cells for defects, test voltage levels, and assemble qualified cells into kits on AGV trays
- Task 2: Module Construction - Take completed kits and construct full battery modules through precise assembly and welding operations
Technical Stack:
- ROS 2 for system architecture and communication
- Gazebo simulation environment
- MoveIt for motion planning and robot control
- C++/Python for control system development
Why Participate?
- Practical ROS experience: Work with industrial-scale robotics applications
- Real-world relevance: EV battery production is a rapidly growing manufacturing sector
- Problem-solving: Address challenges that mirror actual manufacturing environments
- Recognition: Prize money available for eligible teams (1st: $10,000, 2nd: $5,000, 3rd: $2,500) - check the website for eligibility requirements
- Professional development: Experience with automated production systems
Who Should Participate?
- ROS developers interested in manufacturing automation
- Academic teams working on robotics research
- Industry professionals developing automation solutions
- Anyone wanting to test their ROS skills against realistic challenges
Links:
Timeline:
- Registration: Open now
- Smoke Test Submission Deadline: December 8th, 2025
- Final Submission Deadline: January 2nd, 2026
- Results announcement: February 2nd, 2026
Questions?
The NIST team is available to provide technical support through the GitHub issues page.
Good luck to all participating teams!
3 posts - 2 participants
ROS Discourse General: ROSBag MCP Server Research | Cloud Robotics WG Meeting 2025-09-24
Please come and join us for this coming meeting at 1600-1700 UTC on Wednesday 24th September 2025, where we will host a presentation on ROSBag MCP Servers from Lei Fu and Sahar Slimpour, from the Zurich University of Applied Sciences and University of Turku respectively. This research has been shared in a ROS Discourse post, and the authors have agreed to come and tell us more about it.
Previously, the group planned to discuss autonomous anomaly detection. This talk was arranged on short notice. Apologies for the late update.
Please note that the meeting day has changed for the CRWG. Previous meetings were on Monday; they are now on Wednesday at the same time.
Last meeting was skipped, but we did publish a progress post on the sessions we’ve been hosting. If you’re interested in what we’ve been doing and would like to give feedback, please take a look at the post.
The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
2 posts - 1 participant
ROS Discourse General: Upcoming webinar on using the ouster-ros2 driver
Hi everyone,
If you are interested in learning about Ouster sensors and the provided ROS 2 driver, then join us for the upcoming webinar, which will go over the basic steps of getting started and then explore some of the unique features the driver has to offer.
At the end of the webinar there will be a Q&A session.
Note that this will be one of multiple sessions to come covering the various aspects of Ouster sensors and their usage with ROS.
Hope to see you there!
2 posts - 1 participant
ROS Discourse General: New packages for Humble Hawksbill 2025-09-19
Package Updates for humble
Added Packages [29]:
- ros-humble-adi-imu: 1.0.0-1
- ros-humble-adi-imu-dbgsym: 1.0.0-1
- ros-humble-at-sonde-ros-driver: 1.0.0-1
- ros-humble-at-sonde-ros-driver-dbgsym: 1.0.0-1
- ros-humble-ewellix-description: 0.1.1-1
- ros-humble-ewellix-interfaces: 0.1.1-1
- ros-humble-ewellix-interfaces-dbgsym: 0.1.1-1
- ros-humble-ewellix-lift-common: 0.1.1-1
- ros-humble-ewellix-moveit-config: 0.1.1-1
- ros-humble-ewellix-sim: 0.1.1-1
- ros-humble-ewellix-viz: 0.1.1-1
- ros-humble-husarion-ugv-description: 2.2.2-1
- ros-humble-husarion-ugv-msgs: 2.2.2-1
- ros-humble-husarion-ugv-msgs-dbgsym: 2.2.2-1
- ros-humble-kuka-gazebo: 1.0.0-1
- ros-humble-kuka-gazebo-dbgsym: 1.0.0-1
- ros-humble-leo-filters: 1.6.0-1
- ros-humble-leo-filters-dbgsym: 1.6.0-1
- ros-humble-namosim: 0.0.3-1
- ros-humble-network-bridge: 2.0.0-1
- ros-humble-network-bridge-dbgsym: 2.0.0-1
- ros-humble-off-highway-premium-radar: 0.9.0-1
- ros-humble-off-highway-premium-radar-dbgsym: 0.9.0-1
- ros-humble-off-highway-premium-radar-msgs: 0.9.0-1
- ros-humble-off-highway-premium-radar-msgs-dbgsym: 0.9.0-1
- ros-humble-tf2-web-republisher: 1.0.0-1
- ros-humble-tf2-web-republisher-dbgsym: 1.0.0-1
- ros-humble-tf2-web-republisher-interfaces: 1.0.0-1
- ros-humble-tf2-web-republisher-interfaces-dbgsym: 1.0.0-1
Updated Packages [308]:
- ros-humble-ackermann-steering-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-ackermann-steering-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-admittance-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-admittance-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-apriltag: 3.4.4-1 → 3.4.5-1
- ros-humble-apriltag-dbgsym: 3.4.4-1 → 3.4.5-1
- ros-humble-apriltag-ros: 3.2.2-3 → 3.3.0-1
- ros-humble-apriltag-ros-dbgsym: 3.2.2-3 → 3.3.0-1
- ros-humble-async-web-server-cpp: 2.0.0-3 → 2.0.1-1
- ros-humble-async-web-server-cpp-dbgsym: 2.0.0-3 → 2.0.1-1
- ros-humble-automatika-embodied-agents: 0.4.1-1 → 0.4.2-1
- ros-humble-automatika-embodied-agents-dbgsym: 0.4.1-1 → 0.4.2-1
- ros-humble-automatika-ros-sugar: 0.3.1-1 → 0.3.2-1
- ros-humble-automatika-ros-sugar-dbgsym: 0.3.1-1 → 0.3.2-1
- ros-humble-bicycle-steering-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-bicycle-steering-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-camera-ros: 0.4.0-1 → 0.5.0-1
- ros-humble-camera-ros-dbgsym: 0.4.0-1 → 0.5.0-1
- ros-humble-catch-ros2: 0.2.1-1 → 0.2.2-1
- ros-humble-catch-ros2-dbgsym: 0.2.1-1 → 0.2.2-1
- ros-humble-clearpath-common: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-control: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-customization: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-description: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-generator-common: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-generator-common-dbgsym: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-manipulators: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-manipulators-description: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-mounts-description: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-platform-description: 1.3.5-1 → 1.3.6-1
- ros-humble-clearpath-sensors-description: 1.3.5-1 → 1.3.6-1
- ros-humble-compressed-depth-image-transport: 2.5.3-1 → 2.5.4-1
- ros-humble-compressed-depth-image-transport-dbgsym: 2.5.3-1 → 2.5.4-1
- ros-humble-compressed-image-transport: 2.5.3-1 → 2.5.4-1
- ros-humble-compressed-image-transport-dbgsym: 2.5.3-1 → 2.5.4-1
- ros-humble-controller-interface: 2.51.0-1 → 2.52.0-1
- ros-humble-controller-interface-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-controller-manager: 2.51.0-1 → 2.52.0-1
- ros-humble-controller-manager-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-controller-manager-msgs: 2.51.0-1 → 2.52.0-1
- ros-humble-controller-manager-msgs-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-diff-drive-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-diff-drive-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-dynamixel-hardware-interface: 1.4.11-1 → 1.4.14-1
- ros-humble-dynamixel-hardware-interface-dbgsym: 1.4.11-1 → 1.4.14-1
- ros-humble-effort-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-effort-controllers-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-ffmpeg-encoder-decoder: 2.0.1-1 → 3.0.1-1
- ros-humble-ffmpeg-encoder-decoder-dbgsym: 2.0.1-1 → 3.0.1-1
- ros-humble-ffmpeg-image-transport: 2.0.3-1 → 3.0.2-1
- ros-humble-ffmpeg-image-transport-dbgsym: 2.0.3-1 → 3.0.2-1
- ros-humble-ffmpeg-image-transport-tools: 2.1.2-1 → 3.0.1-1
- ros-humble-ffmpeg-image-transport-tools-dbgsym: 2.1.2-1 → 3.0.1-1
- ros-humble-force-torque-sensor-broadcaster: 2.49.1-1 → 2.50.0-1
- ros-humble-force-torque-sensor-broadcaster-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-forward-command-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-forward-command-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-foxglove-compressed-video-transport: 1.0.3-1 → 3.0.1-1
- ros-humble-foxglove-compressed-video-transport-dbgsym: 1.0.3-1 → 3.0.1-1
- ros-humble-franka-inria-inverse-dynamics-solver: 1.0.1-1 → 1.0.2-1
- ros-humble-franka-inria-inverse-dynamics-solver-dbgsym: 1.0.1-1 → 1.0.2-1
- ros-humble-gpio-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-gpio-controllers-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-gripper-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-gripper-controllers-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-gz-ros2-control: 0.7.15-1 → 0.7.16-1
- ros-humble-gz-ros2-control-dbgsym: 0.7.15-1 → 0.7.16-1
- ros-humble-gz-ros2-control-demos: 0.7.15-1 → 0.7.16-1
- ros-humble-gz-ros2-control-demos-dbgsym: 0.7.15-1 → 0.7.16-1
- ros-humble-gz-ros2-control-tests: 0.7.15-1 → 0.7.16-1
- ros-humble-hardware-interface: 2.51.0-1 → 2.52.0-1
- ros-humble-hardware-interface-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-hardware-interface-testing: 2.51.0-1 → 2.52.0-1
- ros-humble-hardware-interface-testing-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-ign-ros2-control: 0.7.15-1 → 0.7.16-1
- ros-humble-ign-ros2-control-demos: 0.7.15-1 → 0.7.16-1
- ros-humble-ign-ros2-control-demos-dbgsym: 0.7.15-1 → 0.7.16-1
- ros-humble-image-transport-plugins: 2.5.3-1 → 2.5.4-1
- ros-humble-imu-sensor-broadcaster: 2.49.1-1 → 2.50.0-1
- ros-humble-imu-sensor-broadcaster-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-inverse-dynamics-solver: 1.0.1-1 → 1.0.2-1
- ros-humble-inverse-dynamics-solver-dbgsym: 1.0.1-1 → 1.0.2-1
- ros-humble-joint-limits: 2.51.0-1 → 2.52.0-1
- ros-humble-joint-limits-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-joint-state-broadcaster: 2.49.1-1 → 2.50.0-1
- ros-humble-joint-state-broadcaster-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-joint-trajectory-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-joint-trajectory-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-kdl-inverse-dynamics-solver: 1.0.1-1 → 1.0.2-1
- ros-humble-kdl-inverse-dynamics-solver-dbgsym: 1.0.1-1 → 1.0.2-1
- ros-humble-kompass: 0.3.0-1 → 0.3.1-1
- ros-humble-kompass-interfaces: 0.3.0-1 → 0.3.1-1
- ros-humble-kompass-interfaces-dbgsym: 0.3.0-1 → 0.3.1-1
- ros-humble-kuka-agilus-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-cybertech-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-external-control-sdk: 1.3.1-1 → 1.4.1-1
- ros-humble-kuka-external-control-sdk-examples: 1.3.1-1 → 1.4.1-1
- ros-humble-kuka-fortec-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-iontec-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-kr-moveit-config: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-lbr-iisy-moveit-config: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-lbr-iisy-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-lbr-iiwa-moveit-config: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-lbr-iiwa-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-mock-hardware-interface: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-mock-hardware-interface-dbgsym: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-quantec-support: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-resources: 0.9.0-2 → 1.0.0-1
- ros-humble-kuka-robot-descriptions: 0.9.0-2 → 1.0.0-1
- ros-humble-launch-pal: 0.14.1-1 → 0.17.0-1
- ros-humble-launch-ros: 0.19.10-1 → 0.19.12-1
- ros-humble-launch-testing-ros: 0.19.10-1 → 0.19.12-1
- ros-humble-leo-bringup: 1.5.0-1 → 1.6.0-1
- ros-humble-leo-fw: 1.5.0-1 → 1.6.0-1
- ros-humble-leo-fw-dbgsym: 1.5.0-1 → 1.6.0-1
- ros-humble-leo-robot: 1.5.0-1 → 1.6.0-1
- ros-humble-libmavconn: 2.10.1-1 → 2.12.0-1
- ros-humble-libmavconn-dbgsym: 2.10.1-1 → 2.12.0-1
- ros-humble-mapviz: 2.5.8-1 → 2.5.10-1
- ros-humble-mapviz-dbgsym: 2.5.8-1 → 2.5.10-1
- ros-humble-mapviz-interfaces: 2.5.8-1 → 2.5.10-1
- ros-humble-mapviz-interfaces-dbgsym: 2.5.8-1 → 2.5.10-1
- ros-humble-mapviz-plugins: 2.5.8-1 → 2.5.10-1
- ros-humble-mapviz-plugins-dbgsym: 2.5.8-1 → 2.5.10-1
- ros-humble-mavlink: 2025.6.6-1 → 2025.9.9-1
- ros-humble-mavros: 2.10.1-1 → 2.12.0-1
- ros-humble-mavros-dbgsym: 2.10.1-1 → 2.12.0-1
- ros-humble-mavros-extras: 2.10.1-1 → 2.12.0-1
- ros-humble-mavros-extras-dbgsym: 2.10.1-1 → 2.12.0-1
- ros-humble-mavros-msgs: 2.10.1-1 → 2.12.0-1
- ros-humble-mavros-msgs-dbgsym: 2.10.1-1 → 2.12.0-1
- ros-humble-mecanum-drive-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-mecanum-drive-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-message-tf-frame-transformer: 1.1.1-1 → 1.1.3-1
- ros-humble-message-tf-frame-transformer-dbgsym: 1.1.1-1 → 1.1.3-1
- ros-humble-mola-common: 0.4.1-1 → 0.5.1-1
- ros-humble-mola-imu-preintegration: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-imu-preintegration-dbgsym: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-lidar-odometry: 0.8.0-1 → 0.9.0-1
- ros-humble-mola-lidar-odometry-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-mola-state-estimation: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-state-estimation-simple: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-state-estimation-simple-dbgsym: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-state-estimation-smoother: 1.9.0-1 → 1.10.0-1
- ros-humble-mola-state-estimation-smoother-dbgsym: 1.9.0-1 → 1.10.0-1
- ros-humble-mp2p-icp: 1.7.1-1 → 1.8.0-1
- ros-humble-mp2p-icp-dbgsym: 1.7.1-1 → 1.8.0-1
- ros-humble-mqtt-client: 2.4.0-1 → 2.4.1-1
- ros-humble-mqtt-client-dbgsym: 2.4.0-1 → 2.4.1-1
- ros-humble-mqtt-client-interfaces: 2.4.0-1 → 2.4.1-1
- ros-humble-mqtt-client-interfaces-dbgsym: 2.4.0-1 → 2.4.1-1
- ros-humble-mrpt-apps: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-apps-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libapps: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libapps-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libbase: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libbase-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libgui: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libgui-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libhwdrivers: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libhwdrivers-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libmaps: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libmaps-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libmath: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libmath-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libnav: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libnav-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libobs: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libobs-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libopengl: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libopengl-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libposes: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libposes-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libros-bridge: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libros-bridge-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libslam: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libslam-dbgsym: 2.14.9-1 → 2.14.12-1
- ros-humble-mrpt-libtclap: 2.14.9-1 → 2.14.12-1
- ros-humble-multires-image: 2.5.8-1 → 2.5.10-1
- ros-humble-multires-image-dbgsym: 2.5.8-1 → 2.5.10-1
- ros-humble-off-highway-can: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-can-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-general-purpose-radar: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-general-purpose-radar-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-general-purpose-radar-msgs: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-general-purpose-radar-msgs-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-premium-radar-sample: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-premium-radar-sample-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-premium-radar-sample-msgs: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-premium-radar-sample-msgs-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-radar: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-radar-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-radar-msgs: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-radar-msgs-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-sensor-drivers: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-sensor-drivers-examples: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-sensor-drivers-examples-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-uss: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-uss-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-uss-msgs: 0.8.0-1 → 0.9.0-1
- ros-humble-off-highway-uss-msgs-dbgsym: 0.8.0-1 → 0.9.0-1
- ros-humble-ompl: 1.7.0-3 → 1.7.0-4
- ros-humble-ompl-dbgsym: 1.7.0-3 → 1.7.0-4
- ros-humble-pid-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-pid-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-pose-broadcaster: 2.49.1-1 → 2.50.0-1
- ros-humble-pose-broadcaster-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-position-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-position-controllers-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-range-sensor-broadcaster: 2.49.1-1 → 2.50.0-1
- ros-humble-range-sensor-broadcaster-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-rclcpp: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-action: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-action-dbgsym: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-components: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-components-dbgsym: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-dbgsym: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-lifecycle: 16.0.14-1 → 16.0.15-1
- ros-humble-rclcpp-lifecycle-dbgsym: 16.0.14-1 → 16.0.15-1
- ros-humble-rcutils: 5.1.6-1 → 5.1.7-1
- ros-humble-rcutils-dbgsym: 5.1.6-1 → 5.1.7-1
- ros-humble-rmw-zenoh-cpp: 0.1.4-1 → 0.1.6-1
- ros-humble-rmw-zenoh-cpp-dbgsym: 0.1.4-1 → 0.1.6-1
- ros-humble-robot-localization: 3.5.3-1 → 3.5.4-1
- ros-humble-robot-localization-dbgsym: 3.5.3-1 → 3.5.4-1
- ros-humble-robotraconteur: 1.2.5-1 → 1.2.6-1
- ros-humble-robotraconteur-dbgsym: 1.2.5-1 → 1.2.6-1
- ros-humble-ros2-control: 2.51.0-1 → 2.52.0-1
- ros-humble-ros2-control-test-assets: 2.51.0-1 → 2.52.0-1
- ros-humble-ros2-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-ros2-controllers-test-nodes: 2.49.1-1 → 2.50.0-1
- ros-humble-ros2action: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2cli: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2cli-test-interfaces: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2cli-test-interfaces-dbgsym: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2component: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2controlcli: 2.51.0-1 → 2.52.0-1
- ros-humble-ros2doctor: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2interface: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2launch: 0.19.10-1 → 0.19.12-1
- ros-humble-ros2lifecycle: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2lifecycle-test-fixtures: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2lifecycle-test-fixtures-dbgsym: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2multicast: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2node: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2param: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2pkg: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2run: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2service: 0.18.13-1 → 0.18.14-1
- ros-humble-ros2topic: 0.18.13-1 → 0.18.14-1
- ros-humble-rqt-controller-manager: 2.51.0-1 → 2.52.0-1
- ros-humble-rqt-dotgraph: 0.0.4-1 → 0.0.5-1
- ros-humble-rqt-joint-trajectory-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-rviz-assimp-vendor: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-common: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-common-dbgsym: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-default-plugins: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-default-plugins-dbgsym: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-ogre-vendor: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-ogre-vendor-dbgsym: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-rendering: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-rendering-dbgsym: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-rendering-tests: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz-visual-testing-framework: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz2: 11.2.19-1 → 11.2.20-1
- ros-humble-rviz2-dbgsym: 11.2.19-1 → 11.2.20-1
- ros-humble-septentrio-gnss-driver: 1.4.4-1 → 1.4.5-1
- ros-humble-septentrio-gnss-driver-dbgsym: 1.4.4-1 → 1.4.5-1
- ros-humble-simulation-interfaces: 1.0.1-1 → 1.1.0-1
- ros-humble-simulation-interfaces-dbgsym: 1.0.1-1 → 1.1.0-1
- ros-humble-steering-controllers-library: 2.49.1-1 → 2.50.0-1
- ros-humble-steering-controllers-library-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-theora-image-transport: 2.5.3-1 → 2.5.4-1
- ros-humble-theora-image-transport-dbgsym: 2.5.3-1 → 2.5.4-1
- ros-humble-tile-map: 2.5.8-1 → 2.5.10-1
- ros-humble-tile-map-dbgsym: 2.5.8-1 → 2.5.10-1
- ros-humble-transmission-interface: 2.51.0-1 → 2.52.0-1
- ros-humble-transmission-interface-dbgsym: 2.51.0-1 → 2.52.0-1
- ros-humble-tricycle-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-tricycle-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-tricycle-steering-controller: 2.49.1-1 → 2.50.0-1
- ros-humble-tricycle-steering-controller-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-turtle-nest: 1.1.0-1 → 1.2.0-1
- ros-humble-turtle-nest-dbgsym: 1.1.0-1 → 1.2.0-1
- ros-humble-ur-client-library: 2.2.0-1 → 2.3.0-1
- ros-humble-ur-client-library-dbgsym: 2.2.0-1 → 2.3.0-1
- ros-humble-ur-description: 2.6.0-1 → 2.7.0-1
- ros-humble-ur10-inverse-dynamics-solver: 1.0.1-1 → 1.0.2-1
- ros-humble-ur10-inverse-dynamics-solver-dbgsym: 1.0.1-1 → 1.0.2-1
- ros-humble-velocity-controllers: 2.49.1-1 → 2.50.0-1
- ros-humble-velocity-controllers-dbgsym: 2.49.1-1 → 2.50.0-1
- ros-humble-web-video-server: 2.1.0-1 → 2.1.1-1
- ros-humble-web-video-server-dbgsym: 2.1.0-1 → 2.1.1-1
- ros-humble-xacro: 2.0.13-1 → 2.1.1-1
- ros-humble-yasmin: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-dbgsym: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-demos: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-demos-dbgsym: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-msgs: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-ros: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-ros-dbgsym: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-viewer: 3.3.0-1 → 3.4.0-1
- ros-humble-yasmin-viewer-dbgsym: 3.3.0-1 → 3.4.0-1
- ros-humble-zenoh-cpp-vendor: 0.1.4-1 → 0.1.6-1
- ros-humble-zenoh-cpp-vendor-dbgsym: 0.1.4-1 → 0.1.6-1
- ros-humble-zenoh-security-tools: 0.1.4-1 → 0.1.6-1
- ros-humble-zenoh-security-tools-dbgsym: 0.1.4-1 → 0.1.6-1
Removed Packages [0]:
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
- Adam Dabrowski
- Aditya Pande
- Alejandro Hernández
- Alexander Xydes
- Analog Devices
- Aron Svastits
- Automatika Robotics
- Bence Magyar
- Bernd Pfrommer
- Błażej Sowa
- Chris Lalancette
- Christian Rauch
- David Brown
- Enrico Ferrentino
- Ethan Brown
- Felix Exner
- Fictionlab
- Gergely Kovacs
- Husarion
- Ivan Paunovic
- Jacob Perron
- Janne Karttunen
- John Wason
- Jordan Palacios
- Jose-Luis Blanco-Claraco
- Kenji Brameld
- Lennart Reiher
- Luis Camero
- MA Song
- Mark Moll
- Max Krogius
- Miguel Ángel González Santamarta
- Nick Morales
- Pyo
- Robert Haschke
- Robin Petereit
- Sarah Huber
- Southwest Research Institute
- Tibor Dome
- Timo Röhling
- Tom Moore
- Vladimir Ermakov
- Yadunund
- miguel
1 post - 1 participant
ROS Discourse General: Nine More Robot Arms Now Have ROS 2 Drivers, Including FANUC and Kawasaki
Since our last update in May, we’ve identified nine new robot arm OEMs with ROS 2 drivers. This reflects the growing momentum of ROS 2 as the interoperability standard across the robotics industry. You can always track the latest progress on PickNik’s ROS 2 Compatible Hardware database.
New OEMs with ROS 2 drivers:
- FANUC
- Kawasaki
- Neura
- Kassow
- Delta
- Dorna
- Aubo
- Fairino
- Hanwha
We’re especially encouraged to see high-quality, up-to-date ROS 2 drivers now available for FANUC and Kawasaki models. These industrial manipulators now support high-bandwidth streaming control via ros2_control, a major step forward for both the robotics community and users of MoveIt Pro.
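As a rough illustration of what streaming joint commands through ros2_control can look like from the client side, here is a minimal rclpy sketch. The controller topic name and joint names are assumptions for illustration only; the actual names depend on how each vendor’s driver configures its controllers.

```python
# Minimal sketch: publishing a joint-space command to a ros2_control
# joint_trajectory_controller. Topic and joint names are illustrative;
# check the controller configuration shipped with your arm's driver.
import rclpy
from rclpy.node import Node
from builtin_interfaces.msg import Duration
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint


class StreamingCommander(Node):
    def __init__(self):
        super().__init__('streaming_commander')
        # Assumed topic name; the real one varies by driver and controller setup.
        self.pub = self.create_publisher(
            JointTrajectory, '/joint_trajectory_controller/joint_trajectory', 10)

    def send_point(self, positions, seconds=1):
        # Build a one-point trajectory reaching `positions` after `seconds`.
        msg = JointTrajectory()
        msg.joint_names = [f'joint_{i + 1}' for i in range(len(positions))]
        point = JointTrajectoryPoint()
        point.positions = positions
        point.time_from_start = Duration(sec=seconds)
        msg.points.append(point)
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = StreamingCommander()
    node.send_point([0.0, -1.57, 1.57, 0.0, 0.0, 0.0])
    rclpy.spin_once(node, timeout_sec=0.5)  # give the publish a chance to go out
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

In practice, production drivers usually stream many such points (or use the controller's action interface) rather than one-shot commands, but the message types above are the standard ros2_control entry point.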
At PickNik, we continue to support the development and use of ROS 2 drivers for robotics programs. And with our ROS-powered MoveIt Pro platform, we provide a complete solution that helps teams get the most from these new hardware integrations.
P.S. We also have a new database of space-rated robot arms, most of which use ROS.
1 post - 1 participant