
Publishing to a topic

See also: rospy.Publisher Code API

You can create a handle to publish messages to a topic using the rospy.Publisher class. The most common usage for this is to provide the name of the topic and the message class/type of the topic. You can then call publish() on that handle to publish a message, e.g.:

import rospy
import std_msgs.msg

pub = rospy.Publisher('topic_name', std_msgs.msg.String, queue_size=10)
pub.publish(std_msgs.msg.String("foo"))

rospy.Publisher initialization

rospy.Publisher(topic_name, msg_class, queue_size=queue_size)

Publisher.publish()

There are three different ways of calling publish() ranging from an explicit style, where you provide a Message instance, to two implicit styles that create the Message instance on the fly.

Explicit style

Implicit style with in-order arguments

Implicit style with keyword arguments
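The three call styles can be sketched as follows. This is a minimal, self-contained illustration: the `String` and `Publisher` classes below are stand-ins (not rospy itself) that mimic how generated message classes such as std_msgs.msg.String accept no arguments, in-order positional arguments, or keyword arguments.

```python
class String(object):
    """Stand-in for std_msgs.msg.String (one field: data)."""
    __slots__ = ['data']
    def __init__(self, data=''):
        self.data = data

class Publisher(object):
    """Stand-in that records what a rospy.Publisher would serialize."""
    def __init__(self, msg_class):
        self.msg_class = msg_class
        self.sent = []
    def publish(self, *args, **kwargs):
        if len(args) == 1 and isinstance(args[0], self.msg_class):
            msg = args[0]                   # explicit style
        elif kwargs:
            msg = self.msg_class(**kwargs)  # keyword style
        else:
            msg = self.msg_class(*args)     # in-order style
        self.sent.append(msg)

pub = Publisher(String)

# Explicit style: build the Message instance yourself.
msg = String()
msg.data = 'hello'
pub.publish(msg)

# Implicit style with in-order arguments: fields are filled positionally.
pub.publish('hello')

# Implicit style with keyword arguments: unset fields keep their defaults.
pub.publish(data='hello')
```

With a real rospy.Publisher the three `publish()` calls look identical; only the message construction differs.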

queue_size: publish() behavior and queuing

publish() in rospy is synchronous by default (for backward compatibility reasons), which means that the invocation blocks until the message has been serialized and written to the transport of every current subscriber.

If any of those connections has connectivity problems, publish() may therefore block for an indefinite amount of time. This is a common problem when subscribing to topics over a wireless connection.

As of Hydro it is recommended to use the new asynchronous publishing behavior which is more in line with the behavior of roscpp.

In order to use the new behavior, the keyword argument queue_size must be passed to the rospy.Publisher constructor; it defines the maximum queue size before messages are dropped.

While serialization still happens synchronously when publish() is invoked, writing the serialized data to each subscriber's connection happens asynchronously in separate threads. As a result, only the subscribers with connectivity problems will miss new messages; the others are unaffected.

If you are publishing faster than rospy can send the messages over the wire, rospy will start dropping old messages.
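The drop-oldest behavior can be modeled with a bounded deque. This is a conceptual sketch of the queuing semantics, not rospy's actual implementation:

```python
from collections import deque

# A deque with maxlen drops the OLDEST entry when full,
# like a publisher queue with queue_size=3.
queue = deque(maxlen=3)
for i in range(10):  # publishing faster than the wire can drain
    queue.append(i)

print(list(queue))  # only the newest three messages survive: [7, 8, 9]

# With maxlen=1 (queue_size=1) only the latest message is ever kept.
latest = deque(maxlen=1)
for reading in (20.1, 20.4, 20.9):
    latest.append(reading)
print(list(latest))  # [20.9]
```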

Note that there may also be an OS-level queue at the transport level, such as the TCP/UDP send buffer.

Choosing a good queue_size

It is hard to provide a rule of thumb for what queue size is best for your application, as it depends on many variables of your system. Still, for beginners who do not care much about message passing, here we provide some guidelines.

If you're just sending one message at a fixed rate, a queue size equal to the publishing frequency (i.e. one second's worth of messages) is fine.

If you are sending multiple messages in a burst, make sure the queue size is big enough to contain all of those messages; otherwise messages are likely to be lost.

Generally speaking, a bigger queue_size only uses more memory when you are actually behind on processing, so it is recommended to pick a value that is larger than necessary rather than one that is too small.

However, if your queue is much larger than it needs to be, a lagging subscriber will queue up many messages. This can lead to messages arriving with large latency, since all messages are delivered in FIFO order once the subscriber catches up.
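As a back-of-the-envelope estimate (the numbers here are illustrative assumptions, not measurements), the worst-case latency added by a full queue is roughly the queue size divided by the rate at which the subscriber drains it:

```python
# If a subscriber drains messages slower than they are published,
# a full queue adds worst-case latency of about queue_size / drain_rate.
queue_size = 1000        # messages buffered for a lagging subscriber
drain_rate_hz = 50.0     # messages the subscriber can consume per second
worst_case_latency_s = queue_size / drain_rate_hz
print(worst_case_latency_s)  # 20.0 seconds of FIFO backlog
```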

queue_size Omitted

If the keyword argument is omitted, the behavior is the same as passing None: publishing is handled synchronously (this is also the only behavior available on Groovy and older ROS distributions). As of Indigo, not passing the keyword argument queue_size will result in a warning being printed to the console.

queue_size None

Not recommended. Publishing is handled synchronously which means that one blocking subscriber will block all publishing. As of Indigo passing None will result in a warning being printed to the console.

queue_size Zero

While a value of 0 means an infinite queue, this can be dangerous since memory usage can grow without bound; it is therefore not recommended.

queue_size One, Two, Three

If your system is not overloaded, you could argue that a queued message should be picked up by the dispatcher thread within a tenth of a second. So a queue size of 1, 2, or 3 would be absolutely fine when publishing at 10 Hz.

Setting queue_size to 1 is a valid approach if you want to make sure that a newly published value always causes any older, not-yet-sent value to be dropped. This is good for, say, a sensor that only cares about the latest measurement: never send an older measurement when a newer one exists.

queue_size Ten or More

An example of when to use a large queue size, such as 10 or greater, is user interface messages (e.g. digital_io, a push button status) that would benefit from a larger queue_size to prevent missing a change in value. Another example is when you want to record all published values including the ones which would be dropped when publishing with a high rate / small queue size.

Complete example

import rospy
from std_msgs.msg import String

pub = rospy.Publisher('topic_name', String, queue_size=10)
rospy.init_node('node_name')
r = rospy.Rate(10)  # 10 Hz
while not rospy.is_shutdown():
    pub.publish("hello world")
    r.sleep()

Subscribing to a topic

See also: rospy.Subscriber Code API

import rospy
from std_msgs.msg import String

def callback(data):
    rospy.loginfo("I heard %s", data.data)

def listener():
    rospy.init_node('node_name')
    rospy.Subscriber("chatter", String, callback)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    listener()

Connection Information

A subscriber can get access to a "connection header", which includes debugging information such as who sent the message, as well as information like whether or not a message was latched. This data is stored as the _connection_header field of a received message.

e.g.

print m._connection_header
{'callerid': '/talker_38321_1284999593611',
 'latching': '0',
 'md5sum': '992ce8a1687cec8c8bd883ec73ca41d1',
 'message_definition': 'string data\n\n',
 'topic': '/chatter',
 'type': 'std_msgs/String'}

We do not recommend using callerid information beyond debugging purposes, as it can lead to brittle code in an anonymous publish/subscribe architecture.
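For debugging, the header can be inspected like any dict. The sketch below uses a stand-in `ReceivedMsg` class (a real message object delivered to a callback carries the same `_connection_header` attribute); the `was_latched` helper is a hypothetical name, not part of rospy:

```python
class ReceivedMsg(object):
    """Stand-in for a message delivered to a subscriber callback."""
    def __init__(self, header):
        self._connection_header = header

def was_latched(msg):
    # Header values are strings; 'latching' is '1' for a latched publisher.
    return getattr(msg, '_connection_header', {}).get('latching') == '1'

m = ReceivedMsg({'callerid': '/talker_38321_1284999593611',
                 'latching': '0',
                 'topic': '/chatter',
                 'type': 'std_msgs/String'})
print(was_latched(m))  # False
```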

