Throughput Control, Heartbeat Period and Flow Controllers #3247
BrunoDatoMeneses asked this question in Q&A
Hi, I'm having trouble setting flow controllers and the heartbeat period.
I'm designing a multi-agent system and use Fast DDS so that my agents can communicate. Each agent has several publishers and subscribers, and I would like to control the overall throughput. I'm using Wireshark to monitor the packets (Fast DDS Monitor reports measurements only every 3 s).
I tried changing the heartbeat period as explained in https://fast-dds.docs.eprosima.com/en/latest/fastdds/use_cases/large_data/large_data.html#tuning-heartbeat-period
Running only one agent, I saw no change in the throughput (see figures below for 200 ms, 3000 ms and 10000 ms).
With a smaller update interval we can see huge peaks and nothing between them.
I also tried the flow controllers as explained in https://fast-dds.docs.eprosima.com/en/latest/fastdds/use_cases/large_data/large_data.html#flow-controllers
Again running only one agent, I saw no change in the throughput (see figures below for 1000, 500, 100 and 50 bps limits).
The higher throughput before every periodic peak is the discovery traffic.
MY CODE:
// ON THE DOMAIN PARTICIPANT
// Limit bps
static const char* flow_controller_name = "custom_flow_controller";
auto custom_flow_control = std::make_shared<eprosima::fastdds::rtps::FlowControllerDescriptor>();
custom_flow_control->name = flow_controller_name;
custom_flow_control->scheduler = eprosima::fastdds::rtps::FlowControllerSchedulerPolicy::FIFO;
custom_flow_control->max_bytes_per_period = PUBLISHERS_BPS_LIMIT / 10;
custom_flow_control->period_ms = 100;

DomainParticipantQos participantQos;
participantQos.name("Domain_Participant_EZ_Chains_Agent " + std::to_string(ID_));
participantQos.flow_controllers().push_back(custom_flow_control);

// Increase the socket buffer sizes
participantQos.transport().send_socket_buffer_size = 1048576;
participantQos.transport().listen_socket_buffer_size = 4194304;

domainParticipant_ = DomainParticipantFactory::get_instance()->create_participant(0, participantQos);

// ON EVERY PUBLISHER
PublisherQos publisherQos = PUBLISHER_QOS_DEFAULT;
publisher_ = participant_->create_publisher(publisherQos, nullptr);

// Create the DataWriter
PublishModeQosPolicy publish_mode;
publish_mode.kind = ASYNCHRONOUS_PUBLISH_MODE;

DataWriterQos dataWriterQos = DATAWRITER_QOS_DEFAULT;
dataWriterQos.publish_mode(publish_mode);
dataWriterQos.publish_mode().flow_controller_name = "custom_flow_controller";

RTPSReliableWriterQos reliable_writer_qos;
reliable_writer_qos.times.heartbeatPeriod = {HEARTBEAT_PERIOD_SECONDS, HEARTBEAT_PERIOD_NANO_SECONDS};
dataWriterQos.reliable_writer_qos(reliable_writer_qos);

writer_ = publisher_->create_datawriter(topic_, dataWriterQos, &listener_);
My questions are:
Is it possible to lower those peaks by spreading the throughput over time using the heartbeat period or the flow controllers?
Am I doing something wrong? Are there other QoS settings that could conflict with these?
Thank you,
Bruno