Sharkfest 2018 EU

I’m back from Sharkfest EU 2018 and once again it was a great conference. This time, many core developers and instructors brought their families along, so it felt even more like a family gathering than ever before.

I have written a number of Sharkfest recaps before, and I felt I needed to do something a little different this time. Instead of talking about my own sessions and those of others, I’ll shine a light on things you can’t see on the Retrospective page, where you’ll find all the presentation materials and videos. I want to give you some “behind the scenes” details that you may not know yet, or that aren’t always available elsewhere but did happen at Sharkfest or around it.

The videos

I don’t know if you’ve ever looked at any of the videos on the Sharkfest Retrospective pages. A while ago I got an email from someone asking me if I still had the slides for one of my early presentations, and they provided a video link to it. I opened it to find out which presentation it was – and the quality of the video shocked me. It’s hard to watch, and the audio is difficult to understand, especially since I’m not a native speaker. But this was back in 2011, when Sharkfest was at Stanford:

I know that Vladimir watched one of my other talks FIVE times to be able to understand what I was saying in the video recording. Now, compare that to one that was recorded this year:

That’s a whole different level now (yes, it’s from Sharkfest US, but the recording setup was identical or even superior at Sharkfest EU, because the audio was picked up directly from the speaker microphone into the recording). A lot of that work is done by Angelo Spampinato, who, as a one-man show, records all three rooms with the best quality possible and keeps improving his setup all the time.

If you wonder why not all videos on the Retrospective page have the camera picture at the lower left, it’s because he can only be in one place at a time, handling the camera. The two other rooms are only screen- and audio-recorded, which is still a lot more than we had before. And in case you want to ask me why not all talks have videos – every presenter has the option to opt out of being recorded if they don’t like it or if they show pcap files that contain information they don’t want to be recorded. There is sometimes a fine line between “I can show this to a room of 50 people” and “everybody on the Internet can watch it until the end of time”. If you wonder why that makes a difference – sometimes the sensitive stuff is hard to find, so the room won’t have time for that during the live presentation, but if you can rewind and watch the recording as often as you like, it’s a different story.

So let’s please have a virtual round of applause for Angelo, who is doing a great job!

Packet Doctors

The “Packet Doctors” session is a very special slot in the agenda that appeared for the first time at Sharkfest 2017 US in Pittsburgh. A couple of presenters had toyed with the idea of doing live packet analysis in front of an audience for quite some time, and last year we gave it a try.

The packet doctors at work – Sake, Landi, me (left to right)

The basic idea is this:

  • a couple of experienced packet analysts (“packet doctors”) look at pcap files provided by people in the audience
  • the doctors take turns on the projector, explaining what they’re doing and why they’re doing it so that the audience can see different approaches and ways of working with packets
  • ideally, the doctors will solve an unsolved problem in the short amount of time they have for each case
  • between 3 and 6 cases will be handled, depending on complexity and problem situation

In Pittsburgh it worked out pretty well – the doctors (Christian Landström, Hansang Bae, Sake Blok, me) actually solved a problem where a gigabit link could only be used at a fraction of its bandwidth. We were able to show that there was an artificial cutoff at 100 Mbit/s in the packet capture. It turned out to be a bandwidth license problem on a WAN device, and when that license was updated the problem went away.

We did the same thing in Estoril, Portugal that same year, without Hansang because he couldn’t attend, and had similar success (someone told me later that we found a problem in 15 minutes that had taken him days to find). We have refined the process over the course of the past four “Packet Doctors” sessions with changing casts of doctors, and the fifth one at Sharkfest EU 2018 in Vienna was fun once again. That’s especially because we were a lot more organized than before, and we had Kary Rogers as a brilliant host who guided us and the audience through the show (of course I got myself into trouble again):

The packet doctors spotting a problem – believe me, there was an audience sitting just behind the projector 😉

Some ground rules we established:

  • All doctors work on the same problem until it is solved or nothing new can be added to the findings in a reasonable time frame
  • The person submitting the pcap is put on the “hot seat” to talk to the doctors and answer their questions while they work on the problem
  • Problems can be already solved or still unsolved on submission, but if a problem was already solved, the pcap must be sufficient to find it (we had people tell us “uh, yeah, sorry, but you couldn’t really find the problem in the file I gave you” afterwards in Mountain View this year. Duh!)

I sometimes get asked where to find the video of that fabulous session, and my answer will surely disappoint you: since we’re looking at real-life pcap files from sensitive networks, the whole session is never recorded, and it never will be. If you want to see the doctors perform exciting live packet surgery, go get a ticket to the next Sharkfest 😉

Developer Den

The “Developer Den” is a room reserved for the Wireshark core developers. But make no mistake – it’s not exclusive to them: you can walk in at almost any time and talk to them about their work on Wireshark, about bugs or problems you have, enhancements you would like to see, and everything else. Sometimes they have “closed door” meetings where everybody else gets kicked out (in a friendly way), and that’s fine, of course.

The Sharkfest 2018 EU Developer Den

To give you two examples of what can happen at Sharkfest if you dare to enter the Developer Den:

  1. Johannes Weber (presenting at Sharkfest for the first time) needed a feature in Wireshark that would tell him the response time of an NTP request, to be able to see how fast the NTP server answered the query. Something like that already existed for DNS, but not for NTP (and it needs to be coded in the dissector, because it’s a relationship between packets which cannot be filtered on unless there’s a field – see the sketch after this list). He was able to get Sake and Pascal interested in his topic, and so it was implemented by Pascal in no time. The same thing can take weeks or months as a feature request on the bug tracker because there are so many items there – but if you show up personally, it often gets done much faster.
  2. Landi had a problem with piping output from tshark to sort that he was able to reproduce and show to Pascal (seeing a pattern here? He’s a real problem-solver ;-)). Imagine being a bug that has a bright flashlight pointed at it while a core developer examines it – it got squashed (fixed) so fast, it couldn’t even hope to run away 🙂
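To illustrate why such a “response time” needs dissector support, here is a rough sketch of the request/response pairing that has to happen before there is anything to filter on. This is Python with scapy purely for illustration – it is not how Pascal implemented it in the C dissector, and the capture file name is made up:

```python
# Rough illustration only, not the Wireshark implementation: pair NTP client
# requests (mode 3) with server responses (mode 4) and print how long the
# server took to answer. Assumes scapy is installed.
from scapy.all import rdpcap, IP, UDP

def ntp_response_times(pcap_path):
    pending = {}  # (client ip, client port, server ip) -> request timestamp
    for pkt in rdpcap(pcap_path):
        if IP not in pkt or UDP not in pkt:
            continue
        udp = pkt[UDP]
        if 123 not in (udp.sport, udp.dport):
            continue
        payload = bytes(udp.payload)
        if not payload:
            continue
        mode = payload[0] & 0x07          # low 3 bits of the first NTP byte
        if mode == 3:                     # client request
            pending[(pkt[IP].src, udp.sport, pkt[IP].dst)] = float(pkt.time)
        elif mode == 4:                   # server response
            key = (pkt[IP].dst, udp.dport, pkt[IP].src)
            if key in pending:
                delta = float(pkt.time) - pending.pop(key)
                print(f"{key[2]} answered {key[0]} in {delta * 1000:.2f} ms")

ntp_response_times("ntp_example.pcap")    # hypothetical capture file
```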

Pro Tip: bringing chocolate or other treats can help you win the core developers over in case you want them to do something – I saw a huge box of Belgian chocolates someone had donated in Vienna 😉

The Wireshark TCP Expert

The Wireshark TCP expert is part of the TCP dissection code. This means that the same block of code that dissects the TCP header also tries to determine problems and symptoms, like “Fast Retransmission”, “Window Update”, “Previous segment not captured”, etc. If you ever want to look at the code doing that, here it is. I read it every once in a while, and it is a beast of a dissector.
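To give you a rough idea of the kind of bookkeeping involved, here is a deliberately simplified sketch in Python – nothing like the real C code, and it ignores SACK, window scaling, keep-alives, sequence number wrap-around and many other details – that only tracks the next expected sequence number per flow:

```python
# Grossly simplified "TCP expert": track the highest sequence number seen per
# flow and flag segments that repeat data or skip ahead. The real dissector
# distinguishes many more cases (fast retransmission, out-of-order, spurious
# retransmission, window updates, ...).
def classify(segments):
    """segments: iterable of dicts with 'flow', 'seq' and 'len' keys (made-up input format)."""
    next_expected = {}                       # flow -> next sequence number we expect
    for seg in segments:
        flow, seq, length = seg["flow"], seg["seq"], seg["len"]
        expected = next_expected.get(flow)
        if expected is None or seq == expected:
            symptom = None                                   # in-order data
        elif seq < expected:
            symptom = "retransmission (or out-of-order)"     # data we have already seen
        else:
            symptom = "previous segment not captured"        # gap before this segment
        next_expected[flow] = max(expected or 0, seq + length)
        yield seg, symptom
```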

During Sharkfest I talked to a couple of people about the TCP expert I am writing myself and about the kinds of problems they had with the Wireshark TCP expert. The biggest problem seemed to be that various versions of Wireshark diagnose different sets of symptoms for the same pcap file, and sometimes symptoms that were correctly diagnosed in earlier versions disappeared or changed in later versions. And some pcaps are not diagnosed correctly in any version of Wireshark, so there’s even more work required to fix that. One of the reasons is that most of the core developers are not very experienced when it comes to TCP analysis (except Sake), so it’s hard for them to build code that does the right thing without breaking symptoms that already work.

To address the problem of symptoms changing with new versions of Wireshark we (some of the core developers like Graham, Peter and Jaap, and me) decided to implement a TCP symptom regression test which will be run together with the existing Wireshark build tests:

  1. a number of short pcaps with a set of symptoms will be provided by TCP analysis experts (e.g. me, Sake, etc) who manually determine the correct result
  2. each pcap and its desired symptom set will be used to run a test to see if Wireshark produces the correct result
  3. if the result is not okay, the code needs to be fixed until it is
  4. each time Wireshark is built, all tests are repeated to make sure the symptom set stays the same

This way we should be getting to a point where we can improve the TCP expert of Wireshark without constantly breaking it in other places.
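Just to make the idea more concrete, such a regression test could look roughly like the following sketch. This is not the actual Wireshark test code; the capture file name and the expected messages are made up:

```python
# Sketch of a TCP symptom regression test: run tshark on a reference capture,
# collect the expert messages of all frames that carry TCP analysis flags, and
# compare them to a manually verified expectation.
import subprocess

EXPECTED = {  # frame number -> expected expert messages (example values only)
    4: {"This frame is a (suspected) retransmission"},
}

def actual_symptoms(pcap_path):
    out = subprocess.run(
        ["tshark", "-r", pcap_path, "-Y", "tcp.analysis.flags",
         "-T", "fields", "-e", "frame.number", "-e", "_ws.expert.message",
         "-E", "occurrence=a"],
        capture_output=True, text=True, check=True).stdout
    result = {}
    for line in out.splitlines():
        frame, _, messages = line.partition("\t")
        result[int(frame)] = set(messages.split(","))
    return result

def test_reference_capture():
    assert actual_symptoms("reference.pcapng") == EXPECTED
```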

Unblocking the pcapng Specifications

The pcapng capture file specifications available here haven’t changed in quite some time and seemed to be deadlocked when Sharkfest EU 2018 started. There have been attempts to improve them in the past, but there was no clear decision process, so everything always ended up in a limbo state. The core developers approached me in Vienna, since I had tried to push the specifications forward for a couple of years, and asked me to help them get things moving again.

pcapng Specifications discussion (Wireshark core developers Jaap, Peter, Anders, me, Stig, left to right)

The main problem is that new block types and other details need to be added to the specifications to allow new features in Wireshark, e.g. the ability to store crypto key information in the file to allow decrypting protocols like WPA, which is quite cumbersome right now.
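For those who have never looked inside a pcapng file: every block starts with a 4-byte block type and a 4-byte total block length, which is exactly what makes it possible to add new block types later without breaking old readers – they can simply skip types they don’t know. Here is a tiny sketch of walking the blocks of a file; it is simplified on purpose (it assumes a little-endian Section Header Block, does no validation, and the file name is made up):

```python
# Walk the blocks of a pcapng file and print their type and size. Every block
# is "type (4 bytes) + total length (4 bytes) + body + trailing total length",
# so unknown block types can simply be skipped.
import struct

def walk_blocks(path):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            block_type, total_len = struct.unpack("<II", header)
            body = f.read(total_len - 8)   # rest of the block, incl. trailing length
            yield block_type, total_len, body

for btype, size, _ in walk_blocks("example.pcapng"):
    print(f"block type 0x{btype:08x}, {size} bytes")
```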

To solve the problem of not being able to make final decisions about how to proceed, I volunteered to do exactly that, under the condition that the Wireshark core developers support it by implementing the changes we make. That way we make sure that Wireshark will be able to read and write files according to the current specifications, so that others can see how it works. And let’s face it – if Wireshark doesn’t support a capture file format, it’s not very useful. My current plan is to get rid of the experimental blocks first and then design a procedure that allows adding more block types in a controlled way. It will take a little time to get things rolling, but we’ll finally get something done.

Final words

I have one story that happened to me during a Sharkfest a while ago. I was in California, standing outside in a group of people during the traditional developer dinner that precedes each Sharkfest. The people next to me were Gerald Combs (creator of Wireshark), Thomas D’Otreppe (creator of Aircrack-ng), Mike Kershaw (creator of Kismet) and Gordon Lyon (a.k.a. Fyodor, creator of nmap). And I was like “wow, that is sooo cool talking to these guys!” – well, that’s what Sharkfest is, too. Rule 1 in action.

Oh, right… there are some Sharkfest rules! These are the two I know of right now:

  1. Rule Zero: Do what Janice says. Better keep it in mind or you’ll be in trouble 😉
  2. Rule One: Talk to each other. That includes presenters, core developers, staff, etc. It makes the magic happen, as outlined in the paragraphs above.

In the end, Sharkfest is only possible because many people donate their time and money to make it happen. That includes all of the sponsors that showed up at one or more of the conferences, presenting their solutions and sponsoring various aspects of the event. I also need to mention Laura Chappell, of course, who runs her successful Wireshark training class at the beginning of each Sharkfest. Thanks to all of you!

And then there are Janice Spampinato and Sheri Najafi without whom Sharkfest simply would not be possible. I have a little more insight than most into the huge amount of work both of them put into it to make things run as smoothly as possible, and they’re both doing an amazing job. I mean, running not one, not two, but THREE Sharkfest conferences in one year (2018: Asia, US, EU) is really hard to do, and they did it. Another big virtual round of applause, everybody!

Last but not least thanks to all the volunteers helping before, during and after the conference, and to all the attendees who dare to enter the realm of the Sharks! Hope to see you at one of the next ones. Check out the conference pages:

Sharkfest EU (Vienna, 2018): https://sharkfesteurope.wireshark.org/
Sharkfest US (Berkeley, 2019): https://sharkfestus.wireshark.org/

Discussions — 7 Responses

  • Rohith April 22, 2020 on 12:49 am

    Hey Jasper,

    This is a query I have after watching one of your sessions from SharkFest’15, titled
    “SharkFest’15 – Jasper Bongertz Class 13”, where you talk about Duplicate ACKs and Long Fat Networks at 41:04.

    The question I have is regarding what you say at 43:10 about how both peers will keep on sending and receiving data to one another.

    Aren’t things like Path MTU discovery, TCP slow start, TCP congestion avoidance, zero window etc. designed to prevent such a thing from happening?
    Isn’t the whole point of using TCP to prevent such things from happening?

    Ps: Sorry, didn’t know how else to reach out to you.

    Thanks in advance

    • Jasper April 22, 2020 on 10:25 am

      Hi Rohith,

      you can reach me on Twitter at @packetjay, or via email at jasper[ät]packet-foo.com 🙂

      You are correct, all these technologies are designed to prevent problems, but they were designed for connections with much lower bandwidth and latency. The main problem with Long Fat Networks is that the travel time for packets is so high that it takes a lot of time to notice, notify and react to problems on the other end. If two nodes are a hundred milliseconds apart from each other and the sender can blast out packets at nanosecond rate, there is a lot of data traveling down the links before a problem notification from the receiver makes it through to the sender. Neither Slow Start nor Congestion Avoidance nor Zero Window will help with that, unfortunately.
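      To put some rough numbers on that (the values are purely illustrative):

```python
# Back-of-the-envelope bandwidth-delay product: how much data can be "in
# flight" before any feedback from the receiver reaches the sender.
bandwidth_bps = 10_000_000_000        # 10 Gbit/s link
rtt = 0.2                             # 200 ms round trip (100 ms each way)
in_flight_bytes = bandwidth_bps * rtt / 8
print(f"{in_flight_bytes / 1e6:.0f} MB in flight before an ACK can arrive")  # 250 MB
```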

  • Rohith April 22, 2020 on 10:16 pm

    Hi Jasper,

    Thank you for the confirmation on this. I was wondering whether the Path MTU would make a difference in this situation, going from a 10 gig link to a 150 Mb link, forgetting that the RFC states all 802.3 will default back to an MTU of 1492.

    Thanks Again,
    : )

  • Rohith May 2, 2020 on 10:48 am

    Hi Jasper,

    Just had a question on the sliding window function.
    I know that the sliding window is the maximum number of bytes which can be received without an ACK. Is this correct?
    Also, a receiver with a sliding window between 2 and 10 will accept packets within this range. What happens if a packet outside this range manages to reach the receiver?
    Is the packet discarded and a DUP ACK sent, or will the packet be kept in the buffer and a SACK sent?

    Regards,
    Rohith

    • Jasper May 2, 2020 on 11:05 am

      Hi Rohith,

      correct. The Window Size tells how many bytes can be sent without having to wait for an ACK.
      The reaction to an “out of window” packet mostly depends on the stack. There are two cases: if a packet arrives with a sequence number higher than the high window edge, you have packet loss, because there’s an obvious gap. This will result in a “previous segment not captured” diagnosis in Wireshark, and the packet will be stored if the stack uses SACK. I’ve seen stacks store packets without SACK, but that’s not always the case.

      If a packet arrives that is lower than the current window (meaning “sequence + size in bytes” of the packet is less than the low window edge), you have an unnecessary retransmission. It means that the sender sent something again even though it had already arrived. If the receiver has already acknowledged that packet, Wireshark will mark the retransmission as “spurious”, otherwise just as a “retransmission”.

  • Rohith Mohan July 2, 2020 on 8:41 pm

    Hey Jasper,

    I have a question regarding decrypting ISAKMP messages (packets number 5 and 6) using Wireshark. Is this possible if I use RSA key exchange instead of DH? (I didn’t know we could not decrypt a DH key exchange until I read your great blog post “how-to-use-wireshark-to-steal-passwords”.)

    I’m not referring to decrypting ESP packets. Have you decrypted Phase 1 packets?

    Also I got a bounce back for your email id: jasper@packet-foo.com. Did I get the id correct?

    Thanks,
    Rohith

    • Jasper July 4, 2020 on 1:07 pm

      Hi Rohith,

      first of all, yes, jasper@packet-foo.com is correct and should have worked. I have no idea what went wrong, sorry.

      I’m not sure if RSA would work, I never tried that – you might want to take this question over to https://ask.wireshark.org, where more people with experience/knowledge about decrypting packets can answer it 😉

      Cheers,
      Jasper

