Hmmm, I'm not the OP, but I run my personal site on a Kubernetes cluster hosted in bhyve VMs running Debian on a FreeBSD machine, using netgraph for the networking. I just tested with iperf3 between the FreeBSD host and an Alpine Linux pod in the cluster, and I only got ~4 Gbit/s. That surprises me, since netgraph is supposed to be capable of much faster networking, but the traffic does pass through several additional layers that may slow it down (off the top of my head: Kubernetes with flannel, iptables in the VM, bhyve, and pf on the FreeBSD host).
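For reference, the test was roughly along these lines (the pod invocation and the host address here are placeholders, not my exact commands):

    # on the FreeBSD host: run an iperf3 server
    iperf3 -s

    # from the cluster: throwaway Alpine pod as the iperf3 client
    # (10.0.0.1 stands in for the host's address on the netgraph bridge)
    kubectl run iperf3-test --rm -it --restart=Never --image=alpine -- \
        sh -c 'apk add --no-cache iperf3 && iperf3 -c 10.0.0.1'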
Thanks, but I am not using if_bridge. I create a netgraph bridge[0] that is connected to the host via a netgraph eiface. From the host, packets then reach my real physical interface because I have gateway_enable set and pf performing NAT[1]. It looks like that blog post connected the netgraph bridge directly to the external interface, so my guess is that my slowdown comes from either pf doing the NAT or the packet forwarding enabled by gateway_enable. A rough sketch of the plumbing is below.
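Roughly, the setup looks like this (interface names and addresses are made up for illustration; my actual commands differ):

    # create an eiface on the host (shows up as ngeth0) and hang an
    # ng_bridge off its ether hook; the bhyve VMs attach to further
    # linkN hooks of the bridge via bhyve's netgraph backend
    ngctl mkpeer . eiface hook ether
    ngctl mkpeer ngeth0: bridge ether link0
    ngctl name ngeth0:ether vmbr0

    # give the host an address on the bridge and enable forwarding
    # (this is what gateway_enable="YES" turns on at boot)
    ifconfig ngeth0 inet 10.0.0.1/24 up
    sysctl net.inet.ip.forwarding=1

    # /etc/pf.conf: NAT the VM/pod network out the physical interface
    ext_if = "igb0"
    nat on $ext_if from 10.0.0.0/24 to any -> ($ext_if)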