1. Cubic
(1) When cwnd is reduced
static inline bool tcp_in_cwnd_reduction(const struct sock *sk)
{
    /* True while the connection is in the CWR or Recovery CA state. */
    return (TCPF_CA_CWR | TCPF_CA_Recovery) &
           (1 << inet_csk(sk)->icsk_ca_state);
}

static void tcp_cong_control(struct sock *sk, u32 ack, u32 acked_sacked,
                             int flag, const struct rate_sample *rs)
{
    if (tcp_in_cwnd_reduction(sk)) {
        /* Reduce cwnd if state mandates */
        tcp_cwnd_reduction(sk, acked_sacked, flag);
    }
    /* ... */
}
After the first SACK pushes the connection into TCPF_CA_Recovery, every subsequent ACK goes through tcp_cwnd_reduction() and cwnd keeps being cut; it only starts growing again once the SACK-marked holes have been retransmitted successfully and the connection leaves recovery.
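The cut is not a single halving: tcp_cwnd_reduction() (net/ipv4/tcp_input.c) implements proportional rate reduction (PRR), letting cwnd glide down toward snd_ssthresh in step with what each ACK reports as delivered. Below is a minimal userspace sketch of that idea, not the kernel code: struct prr_state and prr_reduce() are made-up names, the numbers are illustrative, and kernel corner cases (forcing the first fast retransmit, lost-retransmission handling) are left out.

#include <stdio.h>

/* Simplified, hypothetical state; the kernel keeps the real counters
 * (prr_delivered, prr_out, snd_ssthresh, ...) in struct tcp_sock. */
struct prr_state {
    unsigned int snd_cwnd;      /* current congestion window (packets) */
    unsigned int prior_cwnd;    /* cwnd when recovery started */
    unsigned int ssthresh;      /* target window for the reduction */
    unsigned int prr_delivered; /* packets delivered since recovery began */
    unsigned int prr_out;       /* packets sent since recovery began */
    unsigned int in_flight;     /* packets currently in the network */
};

/* Roughly what tcp_cwnd_reduction() does on each ACK in CWR/Recovery:
 * while in_flight is above ssthresh, send only ~ssthresh/prior_cwnd of
 * what was delivered, so cwnd glides down instead of dropping at once. */
static void prr_reduce(struct prr_state *s, unsigned int newly_acked_sacked)
{
    int sndcnt;

    s->prr_delivered += newly_acked_sacked;
    if (s->in_flight > s->ssthresh) {
        sndcnt = (int)((s->ssthresh * s->prr_delivered + s->prior_cwnd - 1) /
                       s->prior_cwnd) - (int)s->prr_out;
    } else {
        int delta = (int)s->ssthresh - (int)s->in_flight;
        sndcnt = delta < (int)newly_acked_sacked ? delta : (int)newly_acked_sacked;
    }
    if (sndcnt < 0)
        sndcnt = 0;
    s->snd_cwnd = s->in_flight + (unsigned int)sndcnt;
    s->prr_out += (unsigned int)sndcnt;
}

int main(void)
{
    struct prr_state s = { .snd_cwnd = 100, .prior_cwnd = 100,
                           .ssthresh = 70, .in_flight = 100 };

    for (int round = 1; round <= 10; round++) {
        unsigned int acked = 10;   /* packets ACKed/SACKed this round */

        s.in_flight -= acked;      /* those packets have left the network */
        prr_reduce(&s, acked);
        s.in_flight = s.snd_cwnd;  /* assume the sender refills the window */
        printf("round %2d: cwnd = %u\n", round, s.snd_cwnd);
    }
    return 0;
}

Running this, cwnd slides from 100 down to the ssthresh of 70 over a few rounds and then stops shrinking, which is the "keeps decreasing until recovery ends" behavior described above.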
(2) When cwnd increases
1. bictcp_recalc_ssthresh()
2. bictcp_cong_avoid()
3. bictcp_update()
bictcp_recalc_ssthresh() lowers tp->snd_ssthresh; tp->snd_ssthresh then determines how far tcp_cwnd_reduction() cuts tp->snd_cwnd; and the reduced tp->snd_cwnd in turn feeds back into ca->last_max_cwnd the next time bictcp_recalc_ssthresh() runs. Round after round of this under a high loss rate, tp->snd_cwnd shrinks to a very small value, which is why Cubic's throughput is held down so firmly on lossy links; the toy model below shows the collapse.
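A toy model of that feedback loop, using the multiplicative-decrease constants from net/ipv4/tcp_cubic.c (beta = 717, BICTCP_BETA_SCALE = 1024, i.e. roughly a 30% cut per loss event) but otherwise simplified: the per-ACK glide of tcp_cwnd_reduction() is modeled as an instant cut to ssthresh, and loss events are assumed to arrive before cwnd has time to grow back.

#include <stdio.h>

/* Multiplicative-decrease constants as in net/ipv4/tcp_cubic.c:
 * beta = 717, BICTCP_BETA_SCALE = 1024, so each loss cuts cwnd to ~70%. */
#define BETA       717
#define BETA_SCALE 1024

int main(void)
{
    unsigned int cwnd = 100;          /* illustrative starting window (packets) */
    unsigned int last_max_cwnd = 0;
    unsigned int ssthresh;

    /* Assume loss events arrive faster than cwnd can grow back
     * (the high-loss-rate case described above). */
    for (int loss = 1; loss <= 8; loss++) {
        /* bictcp_recalc_ssthresh(): remember the window the loss hit at;
         * with fast convergence, remember slightly less than that. */
        if (cwnd < last_max_cwnd)
            last_max_cwnd = cwnd * (BETA_SCALE + BETA) / (2 * BETA_SCALE);
        else
            last_max_cwnd = cwnd;

        ssthresh = cwnd * BETA / BETA_SCALE;
        if (ssthresh < 2)
            ssthresh = 2;

        /* tcp_cwnd_reduction() then walks cwnd down to ssthresh during
         * recovery; model that here as an instant cut. */
        cwnd = ssthresh;

        printf("loss %d: cwnd = %u, last_max_cwnd = %u\n",
               loss, cwnd, last_max_cwnd);
    }
    return 0;
}

Eight back-to-back loss events take cwnd from 100 down to 4 in this model, and because last_max_cwnd shrinks along with it, cubic growth afterwards also targets the smaller window.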
2. BBR
(1) When cwnd is reduced
static bool bbr_set_cwnd_to_recover_or_restore(
    struct sock *sk, const struct rate_sample *rs, u32 acked, u32 *new_cwnd)
{
    struct tcp_sock *tp = tcp_sk(sk);
    struct bbr *bbr = inet_csk_ca(sk);
    u8 prev_state = bbr->prev_ca_state, state = inet_csk(sk)->icsk_ca_state;
    u32 cwnd = tp->snd_cwnd;

    if (state == TCP_CA_Recovery && prev_state != TCP_CA_Recovery) {
        /* Starting 1st round of Recovery, so do packet conservation. */
        bbr->packet_conservation = 1;
        bbr->next_rtt_delivered = tp->delivered;  /* start round now */
        /* Cut unused cwnd from app behavior, TSQ, or TSO deferral: */
        cwnd = tcp_packets_in_flight(tp) + acked;
    }
    /* ... */
}
(2) When cwnd increases
static bool bbr_set_cwnd_to_recover_or_restore(
    struct sock *sk, const struct rate_sample *rs, u32 acked, u32 *new_cwnd)
{
    /* ... later in the same function ... */
    if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) {
        /* Exiting loss recovery; restore cwnd saved before recovery. */
        cwnd = max(cwnd, bbr->prior_cwnd);
    }
    /* ... */
}
1. The check state == TCP_CA_Recovery && prev_state != TCP_CA_Recovery applies packet conservation only on the first round of Recovery, so a stream of SACKs does not drive cwnd down indefinitely (the fact that SACKs keep arriving shows the path is still delivering data).
2. bbr->prior_cwnd is the window saved before the loss episode; restoring it on exit from recovery is what lets BBR's rate come back after loss. A minimal sketch of this save-and-restore follows.
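This is only a toy model under stated assumptions: struct bbr_sketch, enter_recovery() and exit_recovery() are made-up names, while in the kernel prior_cwnd and packet_conservation live in struct bbr (net/ipv4/tcp_bbr.c) and the window is saved by a helper before recovery begins.

#include <stdio.h>

/* Hypothetical, simplified BBR-like state. */
struct bbr_sketch {
    unsigned int cwnd;        /* current congestion window (packets) */
    unsigned int prior_cwnd;  /* cwnd saved before loss recovery */
    int in_recovery;
};

/* Entering recovery: save the good cwnd, then fall back to packet
 * conservation (cwnd = packets still in flight + packets just ACKed). */
static void enter_recovery(struct bbr_sketch *b, unsigned int inflight,
                           unsigned int acked)
{
    b->prior_cwnd = b->cwnd;
    b->cwnd = inflight + acked;
    b->in_recovery = 1;
}

/* Exiting recovery: restore the window saved before the loss episode,
 * mirroring cwnd = max(cwnd, bbr->prior_cwnd) in the excerpt above. */
static void exit_recovery(struct bbr_sketch *b)
{
    if (b->cwnd < b->prior_cwnd)
        b->cwnd = b->prior_cwnd;
    b->in_recovery = 0;
}

int main(void)
{
    struct bbr_sketch b = { .cwnd = 100 };

    printf("before loss:    cwnd = %u\n", b.cwnd);
    enter_recovery(&b, 60, 5);   /* 60 packets in flight, 5 newly ACKed */
    printf("in recovery:    cwnd = %u\n", b.cwnd);
    exit_recovery(&b);
    printf("after recovery: cwnd = %u\n", b.cwnd);
    return 0;
}

The window drops from 100 to 65 while conserving packets and jumps straight back to 100 when recovery ends, which matches the experiment below where BBR keeps its pre-loss rate once loss starts mid-transfer.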
Experiment:
Environment: Ubuntu 18.04; curl downloading a 1.4 GB file; 1.5% packet loss.
Loss injection: sudo tc qdisc replace dev wlo1 root netem loss 1.5%
1. Cubic
(1) Start the curl download, wait 30 s, then apply the loss rule: the transfer runs at about 7000 KB/s.
(2) Apply the loss rule first, then start the curl download: the transfer runs at about 7000 KB/s.
Without injected loss the transfer runs at about 18 MB/s.
2. BBR
(1) Start the curl download, wait 30 s, then apply the loss rule: the transfer stays at about 18 MB/s.
(2) Apply the loss rule first, then start the curl download: the transfer runs at about 7000 KB/s.
Without injected loss the transfer runs at about 18 MB/s.