
CPU-induced latency Covert Channel Countermeasures
Open, Normal, Public

Description

Some very interesting research and practical attacks that build on the "Hot or Not" clock skew paper. The researcher (Ethan White) proposes some effective countermeasures for TAILS and Whonix deployment.

This ticket is for tracking this development.

IMHO this fix should not be optional. We should always deploy safer defaults whenever possible to meet users' expectations that Whonix protects them from advanced attacks.


https://lists.torproject.org/pipermail/tor-talk/2016-July/041908.html


Related:
T12

Details

Impact
Normal

Event Timeline

HulaHoop created this task. Jul 31 2016, 1:51 AM

I tend towards enabling this by default also, even if it would require more battery.

Is there a fix already?

Does the fix require more battery?

[ I strongly prefer a solution that does not require changing kernel parameters. That would be more flexible with respect to custom configurations. And easier to implement for Qubes. Once the fix boils down to "run these commands" it can be translated into a systemd service that applies this in early boot before networking. ]

This is a good subject for a Whonix blog post to get more attention. And if it would require more battery, we would discuss this beforehand in another blog post also.

In Qubes (or Xen in general), a VM cannot disable c-states. This is possible only from dom0, using this command (at every system startup):

xenpm set-max-cstate 0

There is probably also a Xen command-line option for this, but as you've pointed out, it's easier to maintain a command that runs at startup.
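Such a "command at startup" could be translated into a dom0 systemd unit along these lines (a sketch only; the unit name, file path, and ordering target are assumptions, not an existing Qubes/Whonix component):

```ini
# /etc/systemd/system/disable-cstates.service  (hypothetical path/name)
[Unit]
Description=Disable CPU c-states to mitigate latency covert channels
# apply before any networking is up
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/xenpm set-max-cstate 0

[Install]
WantedBy=multi-user.target
```

Enabled once with `systemctl enable disable-cstates.service`, it would re-apply the setting on every boot, matching the "at every system startup" requirement.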

HulaHoop added a comment (edited). Jul 31 2016, 6:19 PM

No patches have been posted yet, but he plans on submitting them. The fix is described well enough that it should be possible to work on it. We can discuss our plan with him too.

Does the fix require more battery?

Yes, though it's much lower compared to using stress.

How much lower is difficult to know without benchmarking. It may be negligible enough that we don't even need to discuss it.

The c-state fix must be done on the host indeed. [1] (makes sense since power management is sensitive and needs privileged CPU access).

[ I strongly prefer a solution that does not require changing kernel parameters. That would be more flexible with respect to custom configurations. And easier to implement for Qubes. Once the fix boils down to "run these commands" it can be translated into a systemd service that applies this in early boot before networking. ]

+1

The second link [3] in [2] mentions an alternative to using the kernel command line directly. A Python script, part of the "tuned" suite [4], can do the same thing. It recommends against using it on its own, but it's possible. It's not packaged for Debian, but it's worth making a package for it if it gives more flexibility (for Whonix KVM this can be part of the security-misc package):

Now in 2011, Red Hatter Jan Vcelak wrote a handy script called pmqos-static.py that can enable this paradigm. No more dependence on kernel cmdline. On-demand, toggle C-states to your application’s desire. Control C-states from your application startup/shutdown scripts. Use cron to dial up the performance before business hours and dial it down after hours. Significant power savings can come from this simple script, when compared to using the cmdline.

No need to use its cron-based performance options, since it needs to run at all times to provide protection.


[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.0/html/Administration_Guide/KVM_Clock_Appendix.html
[2] https://lists.torproject.org/pipermail/tor-talk/2016-July/041813.html
[3] http://www.breakage.org/2012/11/14/processor-max_cstate-intel_idle-max_cstate-and-devcpu_dma_latency/
[4] https://git.fedorahosted.org/cgit/tuned.git/tree/libexec/pmqos-static.py?h=1.0
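For context, pmqos-static.py works through the kernel's PM QoS interface: a process writes a target latency to /dev/cpu_dma_latency and holds the file descriptor open for as long as the constraint should apply. A minimal sketch of that mechanism (requires root; illustrative, not the script itself):

```python
import os
import struct

def hold_cpu_dma_latency(max_latency_us: int = 0) -> int:
    """Ask the kernel to keep CPU wake-up latency at or below
    max_latency_us (0 effectively keeps CPUs out of deep c-states).
    The request stays active only while the returned fd is open."""
    fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
    os.write(fd, struct.pack("<i", max_latency_us))  # 32-bit little-endian
    return fd  # caller must keep this fd open; closing it drops the request
```

This is also why the protection has to run as a persistent service rather than a one-shot job: the constraint disappears the moment the process exits.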

Is this also useful for general security or only for anonymity?

Is this also useful for general security or only for anonymity?

Should be useful even for non-anonymity cases. Imagine an infected VM without internet access still being able to communicate with the outside world.

I figure I should weigh in on this.

First off, the Whonix gateway appears to do the right thing in silently dropping any network traffic that does not result from a TCP connection it has initiated. This certainly makes executing the attack more difficult; however, an adversary could, for example, inject data into an already-open TCP connection (using a trick similar to TCP veto): the adversary sends a packet with an invalid checksum (and a spoofed return address) and looks at dupack timings (they can observe the amount of data sent on a Wi-Fi network even if the content is encrypted with WPA2 or similar).

Another idea I had for mitigation is to queue up all received packets; then, say, once per second, we go through the queued packets and process them (we wouldn't even send TCP ACKs until we process the packets on the second). This would make gathering enough information to create a useful covert channel infeasible (on the order of thousands of years). However, I believe that this would require kernel hacking, something we probably don't want to do. Just an idea.
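The once-per-second idea can be illustrated with a toy model (not Whonix code): every packet is released at the end of the one-second interval in which it arrived, so sub-interval timing differences vanish.

```python
import math

BATCH_INTERVAL = 1.0  # seconds; all packets that arrive within an
                      # interval are released together at its end

def release_time(arrival: float, interval: float = BATCH_INTERVAL) -> float:
    """Return when a packet arriving at `arrival` leaves the queue."""
    return (math.floor(arrival / interval) + 1) * interval

# Two packets whose arrivals differ by ~80us (roughly a c-state wake-up
# delay) are released at exactly the same instant, so an observer on the
# network learns nothing from their spacing.
assert release_time(10.400000) == release_time(10.400080) == 11.0
```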

I'm still somewhat wary of disabling c-states, due to decreased battery life and increased CPU temperature; the latter is quite scary as it could cause hardware damage. (I've seen CPU temperature climb to 72 Celsius, with sensors reporting the high temperature as 80 Celsius; if an ad in your browser decided it needed to run at 100% CPU for a while...)

I would also point out that, to disable c-states in a hypervised environment, we'd need the co-operation of either the hypervisor, or every single guest (I think; correct me if I'm wrong). My understanding is that, in most Whonix configurations, the hypervisor would be beyond Whonix's control; in a lot of configurations (such as Qubes), Whonix likely wouldn't even control the majority of guests (again, correct me if I'm wrong). Is there a way to get around this?

I figure I should weigh in on this.

Great! Thanks for signing up!

First off, the Whonix gateway appears to do the right thing in silently dropping any network traffic that does not result from a TCP connection it has initiated.

Yes.

However, I believe that this would require kernel hacking, something we probably don't want to do. Just an idea.

Certainly outside of my skill set.

Can you make an argument that mentions only security but implicitly also covers anonymity, in the form of a kernel bug report? I guess a general security issue has a better chance of getting fixed one day via a kernel bug report than one involving anonymity, which may be of much less interest.

["similar" case: "But again, we're not exactly on strong footing to land a kernel patch here. We'd have better luck making an ISN prediction argument, I bet."]

I'm still somewhat wary of disabling c-states

I see. For now, there are no other options?

I would also point out that, to disable c-states in a hypervised environment, we'd need the co-operation of either the hypervisor, or every single guest (I think; correct me if I'm wrong). My understanding is that, in most Whonix configurations, the hypervisor would be beyond Whonix's control; in a lot of configurations (such as Qubes), Whonix likely wouldn't even control the majority of guests (again, correct me if I'm wrong). Is there a way to get around this?

In case of Qubes, cooperation is very likely. (Qubes-Whonix is a joint effort of Qubes and Whonix. @marmarek is Qubes' lead developer and already replied above in this ticket.) If we find a sensible solution, it might be applied right on the Qubes host, perhaps even by default. [Ideally such a solution would not add grave disadvantages such as decreased battery or hardware life.] And if it is not appropriate to enable it by default, perhaps there will be a wizard or switch button or whatever. Let's see what kinds of fixes become available before we decide on how we ship them.

In case of Non-Qubes-Whonix, we would recommend in the documentation that the user apply these changes on the host, and who knows, perhaps someone will one day maintain a whonix-host-additions package or something like that.

ethanwhite added a comment (edited). Aug 3 2016, 5:00 AM

I would also point out that, to disable c-states in a hypervised environment, we'd need the co-operation of either the hypervisor, or every single guest (I think; correct me if I'm wrong). My understanding is that, in most Whonix configurations, the hypervisor would be beyond Whonix's control; in a lot of configurations (such as Qubes), Whonix likely wouldn't even control the majority of guests (again, correct me if I'm wrong). Is there a way to get around this?

As it turns out, simply passing the params processor.max_cstate=1 and idle=poll to the kernel at boot time is sufficient to prevent this attack, provided one spells them correctly (derp), at least in VirtualBox. Further testing would be required for other hypervisors; however, it is definitely possible to disable c-states as a guest operating system (under certain hypervisors), and this appears to effectively mitigate the covert channel in question. In light of this, I think that disabling c-states is likely the right way to go.
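On a Debian-family guest those parameters would typically be made persistent via GRUB (an illustrative fragment using standard Debian paths, not a Whonix-specific recipe):

```shell
# /etc/default/grub -- append the two parameters to the kernel command line
GRUB_CMDLINE_LINUX="processor.max_cstate=1 idle=poll"

# then regenerate the grub config and reboot
sudo update-grub
```

Note that idle=poll busy-spins the idle loop, which is exactly the battery/heat trade-off discussed elsewhere in this ticket.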

I'd expect that a PV VM can't control c-states, so on Qubes it will not be that simple, at least in the default configuration. But worth a try.

Anyway, if it's possible to do this for selected VMs only, I understand it makes sense mostly for Whonix Workstation VMs.

My reasoning:
Assumption 1: the attacker has some control over the Whonix Workstation (e.g. can inject CPU-intensive JS code into a site you visit there).
Assumption 2: the attacker can observe clearnet traffic for selected suspects; they can also ping them or perform other active network measurements from outside the user's system.
Assumption 3: the attacker can't execute code on all of the monitored users' machines.

This means that the attack could be used in one direction only: influencing the latency of clearnet activity by controlling Tor Browser activity (mostly CPU usage). It can't be done the other way around, i.e. controlling CPU usage outside of the Whonix Workstation and measuring latency inside, trying one user after another.

I'm not sure about the Whonix Gateway (the VM where the Tor process is running), but I guess the attacker doesn't have reliable control over CPU usage there, so the attack shouldn't be possible anyway.

Is this correct?

If an attacker compromised a Whonix-Workstation, arbitrary traffic can be sent to Tor, and I would not be surprised if some combination and/or bug could influence its CPU usage. Even if the workstation is not compromised, let's say some fancy browser feature can issue a lot of traffic; that would again influence the gateway's Tor CPU usage. Therefore I guess we are on the safer side enabling the protections for Whonix-Gateway also.

Related:
T12

Patrick updated the task description. Aug 3 2016, 4:13 PM
Patrick added a project: Whonix 14.
HulaHoop added a comment (edited). Aug 3 2016, 4:41 PM

@ethanwhite

it is definitely possible to disable c-states as a guest operating system

Good to know. IMHO, even if that's the case, it's still preferable to place this mitigation outside the untrusted VM so it stays out of reach of the attacker (so they won't be able to disable it). Also, a system-wide solution stops cross-VM covert channel leaks.


I've been looking at virtio-net options that could help with this, but I am not sure if any are useful. virtio-net devices are available in all hypervisors, which could be useful for a cross-platform solution:

https://libvirt.org/formatdomain.html#elementsDriverBackendOptions
https://libvirt.org/formatdomain.html#elementsBackendOptions


I wonder if there are changes that can be made to Tor's own TCP stack to frustrate malformed packets/signals.


EDIT:

Another crazy idea: running a packet stress tool on the virtual network interface on the gateway, something like what is described in the ticket below. Could this interfere with signals sent through?

https://fedoraproject.org/wiki/Features/MQ_virtio_net#Multiqueue_virtio-net

HulaHoop added a comment (edited). Aug 3 2016, 6:23 PM

Here is someone using tc (traffic control) [1] and netem [2] to delay packets in a queue. It can be applied to all traffic. [3]
Another way to delay packets is the libnetfilter_queue interface. [4]


[1] http://linux.die.net/man/8/tc
[2] https://wiki.linuxfoundation.org/networking/netem
[3] https://serverfault.com/questions/389290/using-tc-to-delay-packets-to-only-a-single-ip-address
[4] https://serverfault.com/questions/701228/delaying-packets-with-libnetfilter-queue


EDIT:

After looking at the netem documentation I'm pretty sure there is something here we can use.
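For instance, netem can impose a randomized delay on everything leaving an interface (hypothetical, untested commands; the interface name and delay values are placeholders):

```shell
# add 100ms of delay with +/-50ms of random jitter to all egress on eth0
tc qdisc add dev eth0 root netem delay 100ms 50ms

# remove the qdisc again
tc qdisc del dev eth0 root
```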

ethanwhite added a comment (edited). Aug 4 2016, 9:35 PM

After looking at the netem documentation I'm pretty sure there is something here we can use.

I completely agree. I can't see any disadvantages to using this, and it has numerous advantages over all of the mitigations based on disabling c-states or similar.


I wrote a simple program to implement this that seems to effectively mitigate this attack [1]. It's written using the Python bindings of libnetfilter_queue.

[1] https://gist.github.com/ethan2-0/2c8505049c991fe0aac3d303dddb6075

Here I found an example of someone using libnetfilter_queue to manipulate ICMP packet timing, though their goal is different: they embed covert patterns, while we are preventing them. [1]

libipq can specifically isolate the ICMP queue for later processing by userspace programs:[2]

  1. modprobe iptable_filter
  2. modprobe ip_queue
  3. iptables -A OUTPUT -p icmp -j QUEUE

    will cause any locally generated ICMP packets (e.g. ping output) to be sent to the ip_queue module, which will then attempt to deliver the packets to a userspace application. If no userspace application is waiting, the packets will be dropped

[1] https://www.anfractuosity.com/projects/timeshifter/
[2] http://linux.die.net/man/3/libipq

@ethanwhite

Can you please implement the same protections for IPv6/ICMPv6 if it's not too much work? We plan to roll out the package for Whonix hosts (to end this attack for other VMs besides Whonix), where some users may have no choice but to connect over IPv6 because of their ISP.

It's also a good way to future-proof this protection for Whonix when Tor starts supporting IPv6 connections too.

ethanwhite added a comment (edited). Aug 6 2016, 5:32 AM

Can you please implement the same protections for IPv6/ICMP6 if its not too much work.

It's a matter of using ip6tables as well as iptables; I've added a shell script to configure them both automatically, for ease of use. However, none of the machines I have access to seem to have good IPv6 support, so I wasn't able to test it properly.
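For reference, a mirrored rule pair might look like this (hypothetical commands with a placeholder queue number; NFQUEUE is the iptables target that hands packets to libnetfilter_queue handlers):

```shell
# send all locally generated IPv4 and IPv6 traffic to userspace queue 0
iptables  -A OUTPUT -j NFQUEUE --queue-num 0
ip6tables -A OUTPUT -j NFQUEUE --queue-num 0
```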

As well, it turns out that the iptables rules I was using locally on my testing VM were only matching TCP traffic (derp); the ones in the configuration script I added will match all traffic.

I'm not aware of any other issues. Performance seems decent as well; although this obviously increases the average latency, it can easily handle 10 Mbps of traffic.

EDIT: After thinking about this some more, I'm realizing it may be a better fit for Tails, or even an option in TBB, than for Whonix. If a process in the Whonix guest increases the host's CPU usage, it will decrease the proportion of time the CPU spends in a c-state, and an adversary would still be able to perform the attack using another guest or even the host.

As a result, I think that, unless we're sure that every packet that's processed will at some point end up in this filter (which could be possible if we run the script on the host and use NAT for connectivity in VirtualBox), we probably want to go the c-state route.

I'm not aware of any other issues. Performance seems decent as well; although this obviously increases the average latency, it can easily handle 10 Mbps of traffic.

Excellent

EDIT: After thinking about this some more, I'm realizing it may be a better fit for Tails, or even an option in TBB, than for Whonix. If a process in the Whonix guest increases the host's CPU usage, it will decrease the proportion of time the CPU spends in a c-state, and an adversary would still be able to perform the attack using another guest or even the host.

Because of Whonix's architecture, any solution will need to be deployed on the host side too (no way around it). For TAILS it's a simpler step because they are the host OS. IMHO the queuing solution is the most elegant for the reasons brought up before: it doesn't murder battery life or the hardware.

As for Tor, it would be great if you could convince the TPO devs to add a package, at least for Linux hosts, that can be installed from their repo.

As more advanced attacks become easier and more realistic, the protection gap between those running vanilla TBB and those running specialized Torrification systems will widen.


We would like your feedback on the TCP ISN attack/mitigation info (or on the covert channel attack in general) on the wiki page. Some of the information on there is wrong because it's a complex topic that we didn't understand well at the time.

https://www.whonix.org/wiki/Time_Attacks

Can you please implement the same protections for IPv6/ICMP6 if its not too much work.

It's a matter of using ip6tables as well as iptables; I've added a shell script to configure them both automatically, for ease of use. However, none of the machines I have access to seem to have good IPv6 support, so I wasn't able to test it properly.

You can upgrade to IPv6 using a 4to6 tunnel broker. Some might provide a free IPv6 address (which actually gets tunneled over your IPv4). I haven't looked into that for a while, but in the past that worked well.

https://en.wikipedia.org/wiki/List_of_IPv6_tunnel_brokers

Would that be an option?

As a result, I think that, unless we're sure that every packet that's processed will at some point end up in this filter (which could be possible if we run the script on the host and use NAT for connectivity in VirtualBox), we probably want to go the c-state route.

For Qubes / Qubes-Whonix that may not be a big issue. The solution would simply be applied in the Qubes sys-net VM so all packets would be processed. I might create a package that sets up that configuration, which gets installed inside the Qubes sys-net VM. (Or alternatively we could patch the qubes-core-agent-linux package if @marmarek prefers that, perhaps also depending on the implementation details.)

For Non-Qubes-Whonix / Debian, ideally there would indeed be a general-purpose Debian package that is supposed to be installed on any host. Perhaps a general security argument can be made to convince Debian to apply this fix by default?

ethanwhite added a comment (edited). Aug 8 2016, 10:17 PM

We would like your feedback on the TCP ISN attack/mitigation info (or on the covert channel attack in general) on the wiki page.

I don't immediately see any errors; however, I'm relatively new to covert channels and the likes, so I could definitely be missing something.

You can upgrade to IPv6 using an 4to6 tunnel broker.

I don't think this is necessary. ip6tables is no less functional than iptables (I think; again, I don't have a very good testing setup for IPv6). However, if this does prove to be necessary, it definitely seems like it would work.

For Qubes / Qubes-Whonix that may not be a big issue. The solution would simply be applied in the Qubes sys-net VM so all packets would be processed. I might create a package that sets up that configuration, which gets installed inside the Qubes sys-net VM. (Or alternatively we could patch the qubes-core-agent-linux package if @marmarek prefers that, perhaps also depending on the implementation details.)

That seems like a good idea.

For Non-Qubes-Whonix / Debian, ideally there would indeed be a general-purpose Debian package that is supposed to be installed on any host.

My worry is that this introduces more ways for users to shoot themselves in the foot. If configured improperly, this would provide a false sense of security (more dangerous than no security at all).

As a case study, I thought the reason ICMP traffic wasn't being sent through my libnetfilter_queue handler was that I'd configured iptables improperly. However, it was actually the result of some configuration changes I'd made about a year ago and since forgotten about.

Perhaps a general security argument can be made to convince Debian to apply this fix by default?

This seems unlikely; for the mitigation to be effective at all, we need to queue packets up over an interval of 50ms or more; I chose 150ms to be safe. That means that, starting from nothing, an HTTP request would take around 675ms (as low as 600ms or as high as 750ms, depending on when the initial TCP SYN falls).
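The 600-750ms figure can be reproduced with a crude model in which the five packets of a minimal HTTP exchange (SYN, SYN-ACK, ACK, request, response) each wait for the next 150ms flush in turn; which packets actually traverse the queue is my assumption here, and network latency itself is ignored.

```python
INTERVAL_MS = 150  # queueing interval chosen above

def next_flush(t_ms: int) -> int:
    """A packet arriving at t_ms is released at the next flush boundary."""
    return (t_ms // INTERVAL_MS + 1) * INTERVAL_MS

def http_request_ms(syn_sent_ms: int) -> int:
    """Total time for five packets queued one after another."""
    t = syn_sent_ms
    for _ in range(5):
        t = next_flush(t)
    return t - syn_sent_ms

assert http_request_ms(0) == 750    # SYN waits a full interval: worst case
assert http_request_ms(75) == 675   # average case
assert http_request_ms(149) == 601  # SYN lands just before a flush: best case
```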

HulaHoop added a comment (edited). Aug 9 2016, 12:11 AM

@ethanwhite

Would it be correct to say that the fix developed also defends against the earlier attack described by Steven Murdoch, thereby closing up this entire class of threats?


However, it was actually as a result of some configuration changes I'd made about a year ago that I'd since forgotten about.

Are there any general caveats that users should look out for in configurations to avoid this? Is it iptables rule ordering or network stack options they should pay attention to?

Would it be correct to say that the fix developed also defends against the earlier attack described by Steven Murdoch?

I don't think so. The clock skews Murdoch observed were on the order of 20 milliseconds, far beyond anything the fix in question could hope to mitigate (and besides, Murdoch's attack works on TCP timestamps, which queueing packets up before they're sent out to the network wouldn't affect).

Are there any general caveats that users should look out for in configurations to avoid this?

I had a rule to automatically accept all ICMP packets before my rule to send all packets off to my queue. As long as you don't do that, you should be fine.


I also changed my fix to randomize the amount of time that packets are queued up for (it's different every iteration). That should make averaging attacks even harder. (Note that my randomness source is currently Python's SystemRandom, which is based on /dev/urandom. If that turns out to be a performance issue, I can change it to a simple userspace CSPRNG, such as a hash function in OFB mode.)
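A sketch of that randomization (the interval bounds here are placeholders, not the values the filter actually uses):

```python
import random

rng = random.SystemRandom()  # /dev/urandom-backed, as in the comment above

BASE_MS, JITTER_MS = 100, 100  # hypothetical bounds

def next_interval_ms() -> int:
    """Draw a fresh queueing interval for each flush iteration, so an
    observer cannot lock their averaging onto a fixed period."""
    return BASE_MS + rng.randrange(JITTER_MS + 1)

assert all(BASE_MS <= next_interval_ms() <= BASE_MS + JITTER_MS
           for _ in range(1000))
```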

HulaHoop renamed this task from Covert Channel Data Leaks and Countermeasures to CPU-induced latency Covert Channel Countermeasures. Aug 18 2016, 5:34 AM

Could you please post (and license Open Source) your fix to github? @ethanwhite

In T530#9956, @Patrick wrote:

Could you please post (and license Open Source) your fix to github? @ethanwhite

My fix is available here. Its license comprises the SQLite public-domain dedication, as well as the MIT license warranty and liability disclaimer.

@ethanwhite

Thanks for researching this and contributing a fix.


Murdoch's attack works on TCP timestamps

I see. We (and Tails too, I believe) disable TCP timestamps. ICMP traffic can be blocked entirely with iptables. The only thing we can't do anything about is TCP ISNs.

I've been trying to figure out how practical TCP ISNs are for an attacker in a real-world situation. Here is a vague quote from the paper which I can't understand. What do you make of it? Have you discussed it with Steven?

Another option with Linux is to use TCP sequence numbers, which are the sum of a cryptographic result and a 1 MHz clock. Over short periods, the high frequency gives good results, but as the cryptographic function is re-keyed every 5 minutes, maintaining long term clock skew figures is non-trivial.

The following is an issue for us. (Upgrades come outside of apt-get, which makes it hard for us, as Linux distribution maintainers, to keep it up to date for users. Package manager security and whatnot.)

pip3 install NetfilterQueue

Could it be replaced with the Debian package python-nfqueue? Is it the same?

Could it be replaced with the Debian package python-nfqueue? Is it the same?

The Debian package you mentioned is actually a completely different library serving the same purpose. I'll probably end up porting my code over to use that (perhaps optionally?), but that'll take a couple days (I've found myself suddenly quite busy).

I've been trying to figure out how practical TCP ISNs are for an attacker in a real world situation.

If the attacker's goal is to judge clock skew (which can get to be tens of milliseconds), then it's completely practical; they need only average over about a dozen samples. If their goal is to observe 80us delays due to waking up the processor from a c-state, then they could probably pull it off, although they'd need to average over hundreds or even thousands of samples.

For reference, I believe that Linux generates TCP ISNs as the arithmetic sum of the current time in microseconds and an MD4 hash of some secret data, with the first byte replaced with the current uptime in seconds divided by 300. For almost all purposes, the result of an MD4 hash of secret data can be treated simply as a random number; this is why the averaging method works. Feel free to correct me if I'm wrong.
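An illustrative model of that generator (MD5 standing in for MD4, and the uptime/300 first-byte replacement omitted) shows why averaging works: for a fixed connection the hash term is a constant, so ISN differences leak elapsed microseconds.

```python
import hashlib

def linux_like_isn(secret: bytes, conn: bytes, now_us: int) -> int:
    """Sketch of the scheme described above: a hash of secret data plus
    a microsecond counter, mod 2**32.  NOT the real kernel code."""
    h = int.from_bytes(hashlib.md5(secret + conn).digest()[:4], "big")
    return (h + now_us) & 0xFFFFFFFF

# Two ISNs for the same connection tuple, 80us apart: their difference
# is exactly the elapsed time, which is what an observer averages over.
a = linux_like_isn(b"key", b"4-tuple", 1_000_000)
b = linux_like_isn(b"key", b"4-tuple", 1_000_080)
assert (b - a) & 0xFFFFFFFF == 80
```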

If the attacker's goal is to judge clock skew (which can get to be tens of milliseconds), then it's completely practical

I see.

In a newer presentation, Steven hints that ISNs can be rewritten [1]. Going back to the paper [2] from 22C3, he talks about the different ways (and the difficulty) of rewriting a packet's ISN field with arbitrary data while not standing out on the network, hence the steganographic uses. This gives me an alternative idea to burning up the CPU.

What if we rewrite the ISNs to prevent leaking skew information?

The three different tools in the paper:

  • covert_tcp [3] - primitive, very low throughput; not useful for decent connections, and easily detectable, practically giving away that a user is running Whonix. Code available.
  • Nushu - written by Joanna; goes too far in making the ISN look random, while in reality Linux's ISNs are not that random, so it stands out. No public code.
  • Lathra - the solution proposed by Steven in the paper, which supposedly achieves perfect indistinguishability. No known code. Potentially the best solution if we can get the code.

PS. @ethanwhite I hope you don't mind that we tagged you in a bunch of tasks that you haven't signed up for. We appreciate what you have done for us. Truth is, we would really like it if you stuck around and helped us with more stuff (if you're interested, of course). The benefits go beyond our distro and help other privacy/security OSs too.


[1] http://sec.cs.ucl.ac.uk/users/smurdoch/talks/eurobsdcon07hotornot.pdf - slide 9

TCP sequence numbers

Works for Linux, 1 MHz, generated from system clock; rewriting needs state on firewalls (more in my 22C3 talk)

[2] http://sec.cs.ucl.ac.uk/users/smurdoch/papers/ih05coverttcp.pdf

[3] http://firstmonday.org/ojs/index.php/fm/article/view/528/449

The Debian package you mentioned is actually a completely different library serving the same purpose. I'll probably end up porting my code over to use that

As it turns out, that other library chokes whenever the packet handler releases the GIL (which is the only way to get the packet skewing we want). We can't use the Debian package python-nfqueue.

That really leaves us with two options:

  • I could rewrite the handler entirely in C, in which case all we need is Debian's libnetfilter-queue package. However, I generally consider writing security-critical code in C to be a bad idea, especially when threads are involved like they are here.
  • I could create a Debian package including both my Python handler that depends on the pip NetfilterQueue package, as well as the NetfilterQueue package itself. That could then be used in whatever process Whonix regularly uses to include packages at install time.

I could do either.


What if we rewrite the ISNs to prevent leaking skew information?

First off, this would likely better be discussed directly on T543, as it's largely unrelated to ping latency covert channels.

I think that rewriting TCP ISNs is something we definitely want to do. The precise formula for this could be debated; but I'd suggest E(t + H(localhost | localport | remotehost | remoteport)), where E is a 32-bit block cipher under a secret key, and H(x) is the first 32 bits of the digest of x using a MAC with a secret key (I'd suggest HMAC-SHA256 or similar). A choice of E is arguable; I'd suggest a balanced Feistel network based on RC2, or RC5 (has the patent on that expired yet?).
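A toy rendering of that formula, with HMAC-SHA256 supplying both H and the round function of a small balanced Feistel network standing in for E (the parameter choices are mine, purely illustrative):

```python
import hashlib
import hmac

def h32(key: bytes, data: bytes) -> int:
    """H(x): first 32 bits of an HMAC-SHA256 digest, as suggested above."""
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest()[:4],
                          "big")

def _round(key: bytes, i: int, half: int) -> int:
    # 16-bit round function derived from HMAC-SHA256 (a toy choice for E)
    return h32(key, bytes([i]) + half.to_bytes(2, "big")) & 0xFFFF

def feistel32_enc(key: bytes, x: int, rounds: int = 8) -> int:
    """E: toy 32-bit balanced Feistel network under a secret key."""
    l, r = x >> 16, x & 0xFFFF
    for i in range(rounds):
        l, r = r, l ^ _round(key, i, r)
    return (l << 16) | r

def feistel32_dec(key: bytes, x: int, rounds: int = 8) -> int:
    """Inverse of feistel32_enc: undo the rounds in reverse order."""
    l, r = x >> 16, x & 0xFFFF
    for i in reversed(range(rounds)):
        l, r = r ^ _round(key, i, l), l
    return (l << 16) | r

def rewrite_isn(e_key: bytes, h_key: bytes, t: int, conn: bytes) -> int:
    """E(t + H(localhost|localport|remotehost|remoteport)) per the
    formula above; conn is the concatenated connection tuple bytes."""
    return feistel32_enc(e_key, (t + h32(h_key, conn)) & 0xFFFFFFFF)

# sanity check: E is invertible under the key
x = 0x12345678
assert feistel32_dec(b"k", feistel32_enc(b"k", x)) == x
```

The decryption direction matters because a rewriting firewall has to map sequence/acknowledgement numbers on the reverse path back to the originals.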

As for TCP timestamps, I'd suggest just decreasing the resolution to 1 Hz (it's currently around 1 kHz, at least in Ubuntu), possibly adding a bit of red-herring skew that should overpower that from the heat.

First off, this would likely better be discussed directly on T543, as it's largely unrelated to ping latency covert channels.

You're right :) I'll answer you there.

The Debian package you mentioned is actually a completely different library serving the same purpose. I'll probably end up porting my code over to use that

As it turns out, that other library chokes whenever the packet handler releases the GIL (which is the only way to get the packet skewing we want). We can't use the Debian package python-nfqueue.

That really leaves us with two options:

  • I could rewrite the handler entirely in C, in which case all we need is Debian's libnetfilter-queue package. However, I generally consider writing security-critical code in C to be a bad idea, especially when threads are involved like they are here.

I agree, C is best avoided.

  • I could create a Debian package including both my Python handler that depends on the pip NetfilterQueue package, as well as the NetfilterQueue package itself. That could then be used in whatever process Whonix regularly uses to include packages at install time.

https://github.com/kti/python-netfilterqueue is a small project, by the looks of it. Unfortunately, it is apparently not yet in Debian. (feature request)

So yes, while not an ideal solution, embedding the kti/python-netfilterqueue code in your package would be a tremendous help!

I've created some bash scripts to create a Debian package for kti/python-netfilterqueue. They're available in this GitHub repository, and I've uploaded a version of the package created on my Debian Jessie system here. There are still a few issues I'll be resolving in the coming days, including the lack of a source package, but it's overall completely functional.

I'm thinking that, from an architecture standpoint, we probably want to have one package for kti/python-netfilterqueue, and another one for my NetfilterQueue handler, rather than merge them both into the same package. This would be good if we end up with more than one NetfilterQueue handler (which seems likely; see, for example, T543). I'll also be creating a Debian package for my NetfilterQueue handler in the coming days.

I'm thinking that, from an architecture standpoint, we probably want to have one package for kti/python-netfilterqueue, and another one for my NetfilterQueue handler, rather than merge them both into the same package. This would be good if we end up with more than one NetfilterQueue handler (which seems likely; see, for example, T543). I'll also be creating a Debian package for my NetfilterQueue handler in the coming days.

Yes, that seems a lot better than all in one.

I've now added Debian packaging support to the actual filter. Both packages install correctly and work well.

I've put the filter up on a proper GitHub repository here (instead of a Gist). I've also put up a version of the package I built on Debian Jessie here.

Looks like I overlooked python3-netfilterqueue-packager.

There are a few errors during the package build; cp and du are failing.

bash -x build.sh 
+ source build.conf
++ archive_sha256=f24c592a0d2e8b2233ee365528fc1f90f7e3d80cb35c09195e3aafe3d451eac5
++ archive_url=https://pypi.python.org/packages/7b/c3/204d47c1c47a7fd6ac1e4e341bdc6021f8142e6c7b6e488436592a6d2488/NetfilterQueue-0.7.tar.gz
++ package_version=0.7
+ cd src
+ ./build.sh https://pypi.python.org/packages/7b/c3/204d47c1c47a7fd6ac1e4e341bdc6021f8142e6c7b6e488436592a6d2488/NetfilterQueue-0.7.tar.gz f24c592a0d2e8b2233ee365528fc1f90f7e3d80cb35c09195e3aafe3d451eac5
[Build] Downloading archive, verifying sha256, and extracting...
--2016-10-11 22:21:07--  https://pypi.python.org/packages/7b/c3/204d47c1c47a7fd6ac1e4e341bdc6021f8142e6c7b6e488436592a6d2488/NetfilterQueue-0.7.tar.gz
Resolving pypi.python.org (pypi.python.org)... 151.101.12.223, 2a04:4e42:3::223
Connecting to pypi.python.org (pypi.python.org)|151.101.12.223|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 55215 (54K) [application/octet-stream]
Saving to: ‘NetfilterQueue-0.7.tar.gz’

NetfilterQueue-0.7.tar.gz       100%[=======================================================>]  53.92K   277KB/s   in 0.2s   

2016-10-11 22:21:08 (277 KB/s) - ‘NetfilterQueue-0.7.tar.gz’ saved [55215/55215]

[Build] SHA256 matches. (= f24c592a0d2e8b2233ee365528fc1f90f7e3d80cb35c09195e3aafe3d451eac5)
[Build] Creating virtualenv...
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in ./bin/python3
Also creating executable in ./bin/python
Installing setuptools, pip...done.
[Build] Installing netfilterqueue in virtualenv...
running install
running build
running build_ext
building 'netfilterqueue' extension
creating build/temp.linux-x86_64-3.4
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -I/home/user/python3-netfilterqueue-packager/src/include/python3.4m -c netfilterqueue.c -o build/temp.linux-x86_64-3.4/netfilterqueue.o
netfilterqueue.c: In function ‘__pyx_f_14netfilterqueue_6Packet_set_nfq_data’:
netfilterqueue.c:1939:67: warning: passing argument 2 of ‘nfq_get_payload’ from incompatible pointer type
   __pyx_v_self->payload_len = nfq_get_payload(__pyx_v_self->_nfa, (&__pyx_v_self->payload));
                                                                   ^
In file included from netfilterqueue.c:271:0:
/usr/include/libnetfilter_queue/libnetfilter_queue.h:119:12: note: expected ‘unsigned char **’ but argument is of type ‘char **’
 extern int nfq_get_payload(struct nfq_data *nfad, unsigned char **data);
            ^
netfilterqueue.c: In function ‘__pyx_pf_14netfilterqueue_6Packet_4get_payload’:
netfilterqueue.c:2279:5: warning: implicit declaration of function ‘PyString_FromStringAndSize’ [-Wimplicit-function-declaration]
     __pyx_t_2 = PyString_FromStringAndSize(__pyx_v_self->payload, __pyx_v_self->payload_len); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error)
     ^
netfilterqueue.c:2279:15: warning: assignment makes pointer from integer without a cast
     __pyx_t_2 = PyString_FromStringAndSize(__pyx_v_self->payload, __pyx_v_self->payload_len); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error)
               ^
creating build/lib.linux-x86_64-3.4
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.4/netfilterqueue.o -lnetfilter_queue -o build/lib.linux-x86_64-3.4/netfilterqueue.cpython-34m.so
running install_lib
copying build/lib.linux-x86_64-3.4/netfilterqueue.cpython-34m.so -> /home/user/python3-netfilterqueue-packager/src/lib/python3.4/site-packages
running install_egg_info
Writing /home/user/python3-netfilterqueue-packager/src/lib/python3.4/site-packages/NetfilterQueue-0.7.egg-info
[Build] Copying binaries to package directory...
cp: cannot create regular file ‘/home/user/python3-netfilterqueue-packager/src/../package/usr/lib/python3/dist-packages’: No such file or directory
cp: cannot create regular file ‘/home/user/python3-netfilterqueue-packager/src/../package/usr/lib/python3/dist-packages’: No such file or directory
+ cd ..
+ cd package
+ find DEBIAN
+ grep -v DEBIAN
+ xargs md5sum
+ cd ..
++ du --block-size 1K package/usr/lib/python3/dist-packages/NetfilterQueue-0.7.egg-info
++ cut -f1
du: cannot access ‘package/usr/lib/python3/dist-packages/NetfilterQueue-0.7.egg-info’: No such file or directory
+ egg_info_du=
++ du --block-size 1K 'package/usr/lib/python3/dist-packages/netfilterqueue.cpython*'
++ cut -f1
du: cannot access ‘package/usr/lib/python3/dist-packages/netfilterqueue.cpython*’: No such file or directory
+ so_du=
build.sh: line 39: +  : syntax error: operand expected (error token is "+  ")
+ echo 'Package installed size:  kilobytes'
Package installed size:  kilobytes
++ python3 -c 'import sys; print(sys.version_info.minor)'
+ py3_version=4
+ py3_nextversion=5
+ echo 'Python 3 version: 3.4, next version: 3.5'
Python 3 version: 3.4, next version: 3.5
++ dpkg --print-architecture
+ architecture=amd64
+ cat
+ dpkg -b package/ python3-netfilterqueue_0.7_amd64.deb
dpkg-deb: building package `python3-netfilterqueue' in `python3-netfilterqueue_0.7_amd64.deb'.
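Reading the log, the failures look like one missing directory plus one quoting bug: cp fails because package/usr/lib/python3/dist-packages does not exist yet, and the second du fails because the glob is inside quotes, so the shell never expands it. A hedged sketch of the fix (paths reproduced from the log; the mkdir-before-cp and unquoted-glob pattern is my suggestion, not code from the repository):

```shell
set -e
pkg=package/usr/lib/python3/dist-packages
mkdir -p "$pkg"                   # create the target tree before cp runs
# Stand-in for the built extension module copied by the install step:
printf '' > "$pkg/netfilterqueue.cpython-34m.so"
# Leave the glob OUTSIDE the quotes so the shell can expand it;
# 'package/...cpython*' as a single quoted string matches nothing.
so_du="$(du --block-size 1K "$pkg"/netfilterqueue.cpython* | cut -f1)"
echo "so size: ${so_du:-0} kilobytes"
```

With the glob fixed, the empty so_du that triggered the `syntax error: operand expected` arithmetic failure on build.sh line 39 should disappear as well, since du then produces a real number.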

Another LAN/public Wi-Fi fingerprinting attack that Ethan's code can defeat:

http://www2.ece.gatech.edu/cap/papers/1569740227-3.pdf

Patrick edited projects, added Whonix 15; removed Whonix 14.Apr 13 2017, 11:11 AM