## Description

In the Go `net/icmp` package, the `ID` and `Seq` fields of the `Echo` struct are defined as `int`. However, the `Marshal` method converts these fields to `uint16`, which silently truncates the values of `ID` or `Seq` when they exceed the 16-bit range.
```go
// An Echo represents an ICMP echo request or reply message body.
type Echo struct {
	ID   int    // identifier
	Seq  int    // sequence number
	Data []byte // data
}

// Len implements the Len method of MessageBody interface.
func (p *Echo) Len(proto int) int {
	if p == nil {
		return 0
	}
	return 4 + len(p.Data)
}

// Marshal implements the Marshal method of MessageBody interface.
func (p *Echo) Marshal(proto int) ([]byte, error) {
	b := make([]byte, 4+len(p.Data))
	binary.BigEndian.PutUint16(b[:2], uint16(p.ID))
	binary.BigEndian.PutUint16(b[2:4], uint16(p.Seq))
	copy(b[4:], p.Data)
	return b, nil
}
```
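To make the truncation concrete, here is a minimal, self-contained sketch. It assumes the `golang.org/x/net/icmp` and `golang.org/x/net/ipv4` packages, and the out-of-range ID value `0x12345` is chosen purely for illustration:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

func main() {
	// 0x12345 does not fit in 16 bits; uint16(0x12345) keeps only 0x2345.
	msg := icmp.Message{
		Type: ipv4.ICMPTypeEcho,
		Body: &icmp.Echo{ID: 0x12345, Seq: 1, Data: []byte("ping")},
	}
	b, err := msg.Marshal(nil)
	if err != nil {
		panic(err)
	}
	// Bytes 4-5 of the marshaled message hold the echo identifier.
	fmt.Printf("ID on the wire: %#x\n", binary.BigEndian.Uint16(b[4:6])) // 0x2345, not 0x12345
}
```

No error is returned and nothing is logged, so the caller has no indication that the identifier on the wire differs from the one it set.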
## Design Intent

I noticed that the `ID` and `Seq` fields are defined as `int`, which is not strictly aligned with the ICMP protocol specification (RFC 792), where these fields are 16-bit. Could you please clarify the original intent behind defining these fields as `int` instead of `uint16`?
## Summary

I believe changing the types of `ID` and `Seq` to `uint16` would make the implementation more consistent with the ICMP protocol specification. Understanding the original design intent would also help the community better align with Go's design philosophy.
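If changing the exported field types is undesirable for compatibility reasons, one alternative shape, sketched below as a hypothetical modification to `Marshal` inside the package (assuming an added `errors` import), would be to keep the `int` fields but reject out-of-range values instead of silently truncating them:

```go
// Marshal implements the Marshal method of MessageBody interface.
// Hypothetical variant: return an error instead of truncating.
func (p *Echo) Marshal(proto int) ([]byte, error) {
	if p.ID < 0 || p.ID > 0xffff || p.Seq < 0 || p.Seq > 0xffff {
		return nil, errors.New("icmp: echo identifier or sequence number outside 16-bit range")
	}
	b := make([]byte, 4+len(p.Data))
	binary.BigEndian.PutUint16(b[:2], uint16(p.ID))
	binary.BigEndian.PutUint16(b[2:4], uint16(p.Seq))
	copy(b[4:], p.Data)
	return b, nil
}
```

This keeps the exported API unchanged while surfacing the problem to callers, though it does turn previously "working" (truncating) calls into errors.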
Thank you for your attention!