
research: Single Tor-Gateway with Multiple Workstations vs Multiple Tor-Gateways mapped 1:1 to Workstation VMs
Closed, Resolved (Public)

Description

I think the multi-WS advice on its own is pointless for security, and while possible, it's not advised. I would like to remove that section completely and keep the multiple-GW material instead, to discourage dangerous setups.

Also, information like:

"An adversary could stress either/and CPU, HDD, RAM, network connection and other Whonix-Workstations and perhaps also the host would suffer."

is factually incorrect since hypervisors can and should partition resources to prevent any single VM from affecting another this way.

https://www.whonix.org/wiki/Dev/Multiple_Whonix-Workstations

Details

Impact
Normal

Event Timeline

Why did you remove the '''Qubes-Whonix vs non-Qubes-Whonix''' section? Qubes-Whonix does a lot better with multiple ws's behind the same gw than Non-Qubes-Whonix. This is because the various ws's cannot impersonate each other. They also cannot connect to each other.

An adversary can easily sniff and impersonate communication between another VM on the same internal network

Applies to Non-Qubes-Whonix only.

is factually incorrect since hypervisors can and should partition resources to prevent any single VM from affecting another this way.

They should, for sure. But do they? It has never worked for me. Specifically, I have never found anything to tame I/O stress (as seen with iotop).

Ephemeral Onion Service keys of another workstation's programs, for example.

That sentence has to be improved.

  • Only detached ephemeral onion services.
  • And only if using Whonix 14 and above.
  • And only when whitelisting the command for listing ephemeral onion services in cpfpy. (Which command even is that? It's not clear that it will be required, since there are no ephemeral HS applications in the wild that would require that command.)

For example, an exploit in the browser can not read your IRC identity in another VM. There are no security benefits of running multiple Whonix-Workstation VMs that share the same Gateway.

Contradiction.


Non-Qubes-Whonix one ws connecting to another: Will no longer be possible in Whonix 14.

Non-Qubes-Whonix impersonating: Sadly we still don't have authentication between ws and gw (ARP spoofing defense). The theory has been researched. It's certainly doable. ( https://www.whonix.org/wiki/Connections_between_Whonix-Gateway_and_Whonix-Workstation )


I am not convinced yet. We should keep this somewhere either way. But yes, it can be improved, since this is very different for different platforms.

Why did you remove the '''Qubes-Whonix vs non-Qubes-Whonix''' section? Qubes-Whonix does a lot better with multiple ws's behind the same gw than Non-Qubes-Whonix. This is because the various ws's cannot impersonate each other. They also cannot connect to each other.

Do you mean they can share a single GW running for different trust levels with ephemeral services safely?

I thought the last statement in your comment here https://phabricator.whonix.org/T448#10443 implied this wasn't a recommended setup for Whonix, unless you mean non-Qubes.

Please revert all my changes from yesterday and cherry-pick the ones you feel useful.

blkiotune and iotune can restrict I/O (KVM only)

https://libvirt.org/formatdomain.html#elementsBlockTuning
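
For illustration, a domain XML sketch of both elements (the limit values are arbitrary placeholders, not recommendations):

<domain type='kvm'>
  ...
  <blkiotune>
    <!-- relative I/O weight of this VM -->
    <weight>500</weight>
  </blkiotune>
  <devices>
    <disk type='file' device='disk'>
      ...
      <iotune>
        <!-- cap throughput and IOPS for this disk -->
        <total_bytes_sec>10485760</total_bytes_sec>
        <total_iops_sec>400</total_iops_sec>
      </iotune>
    </disk>
  </devices>
</domain>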


Non-Qubes-Whonix one ws connecting to another: Will no longer be possible in Whonix 14.

Why?


This is because various [Qubes] ws's cannot impersonate each other. They also cannot connect to each other.

Interesting. How does it do this?

Non-Qubes-Whonix impersonating: Sadly we still don't have authentication between ws and gw (ARP spoofing defense). The theory has been researched. It's certainly doable.

https://libvirt.org/formatnwfilter.html#nwfelemsRulesProtoARP
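
To illustrate, a rough sketch of the kind of rule libvirt's built-in no-arp-spoofing filter applies (the filter name here is made up; $MAC and $IP are standard nwfilter variables):

<filter name='arp-whitelist-sketch' chain='arp'>
  <!-- allow outbound ARP only with the VM's own MAC and IP -->
  <rule action='return' direction='out' priority='400'>
    <arp arpsrcmacaddr='$MAC' arpsrcipaddr='$IP'/>
  </rule>
  <!-- drop all other outbound ARP -->
  <rule action='drop' direction='out' priority='1000'>
    <arp/>
  </rule>
</filter>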

Why did you remove the '''Qubes-Whonix vs non-Qubes-Whonix''' section? Qubes-Whonix does a lot better with multiple ws's behind the same gw than Non-Qubes-Whonix. This is because the various ws's cannot impersonate each other. They also cannot connect to each other.

Do you mean they can share a single GW running for different trust levels with ephemeral services safely?

I thought the last statement in your comment here https://phabricator.whonix.org/T448#10443 implied this wasn't a recommended setup for Whonix, unless you mean non-Qubes.

To specify the following...

In T448#10443, @Patrick wrote:

I would go as far as recommending against using the same gateway both for hosting hidden services (let alone ephemeral hidden services) and for other tasks such as simple browsing.

What I would recommend:

  • use one gw / ws[s] combination for simple client activities (browsing, mail, chat)
  • use another gw / ws[s] combination where add_onion is whitelisted, i.e. where using Tor ephemeral hidden services is allowed

Deanonymization is more at risk when using Tor hidden services compared to just using Tor as client. This is because Tor hidden services can be made to talk by connecting to them. A whole different class of attacks.

So when one just uses Tor as a client and that ws gets compromised, it would be safer if that VM did not have access to Tor's ability to create ephemeral hidden services, since that allows more deanonymization attacks.

blkiotune and iotune can restrict I/O (KVM only)

https://libvirt.org/formatdomain.html#elementsBlockTuning


Non-Qubes-Whonix one ws connecting to another: Will no longer be possible in Whonix 14.

Why?

Because Whonix-Workstation firewall will be enabled by default.


This is because various [Qubes] ws's cannot impersonate each other. They also cannot connect to each other.

Interesting. How does it do this?

By dynamically creating a separate vif interface per VM that connects to another ProxyVM. In whonix-gw-firewall, instead of using eth1 for the internal interface, we just set it to vif+ for Qubes.

Non-Qubes-Whonix impersonating: Sadly we still don't have authentication between ws and gw (ARP spoofing defense). The theory has been researched. It's certainly doable.

https://libvirt.org/formatnwfilter.html#nwfelemsRulesProtoARP

Can you implement it?

In T567#10650, @Patrick wrote:

Worth a separate ticket.

blkiotune and iotune can restrict I/O (KVM only)

https://libvirt.org/formatdomain.html#elementsBlockTuning

That's nice. To put that back into context:

"An adversary could stress either/and CPU, HDD, RAM, network connection and other Whonix-Workstations and perhaps also the host would suffer." [1]

Sure, if you can implement all restrictions such as blkiotune and iotune and whatnot, then that would be awesome. Then that sentence [1] could also be rewritten. And more importantly, that (and the Qubes internal LAN spoofing resistance) is great stuff for a comparison table, which can be a colorful tool to document platform differences; to point out other platforms' issues and ideally have them ironed out some day.

By dynamically creating a separate vif interface per VM that connects to another ProxyVM. In whonix-gw-firewall, instead of using eth1 for the internal interface, we just set it to vif+ for Qubes.

So you are dynamically creating new separate internal networks that are connected to a virtual interface eth0+n?

Or are you enforcing traffic rules that prevent WS ip/traffic spoofing from/to separate GW interfaces while everything shares the same internal network?

Can you implement it?

Yes, simple. There is a default clean-traffic filter that I can just reference to activate it.
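
For reference, activating it is a one-line filter reference in the guest's interface definition (the network name is illustrative; clean-traffic bundles the no-mac-spoofing, no-ip-spoofing and no-arp-spoofing filters, among others):

<interface type='network'>
  <source network='Whonix-Internal'/>
  <filterref filter='clean-traffic'/>
  ...
</interface>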

Sure, if you can implement all restrictions such as blkiotune and iotune and whatnot then that would be awesome.

I am not sure what limits to set. I might have to look some more.

HulaHoop added a comment.

> By dynamically creating a separate vif interface per VM that connects to another ProxyVM. In whonix-gw-firewall, instead of using eth1 for the internal interface, we just set it to vif+ for Qubes.

So you are dynamically creating new separate internal networks that are connected to a virtual interface eth0+n?

It's a standard Qubes ProxyVM feature. It creates a new separate virtual internal network per VM, like vif11.0, vif15.0, etc.

https://github.com/QubesOS/qubes-core-agent-linux/blob/master/network/vif-route-qubes

Or are you enforcing traffic rules that prevent WS ip/traffic spoofing from/to separate GW interfaces while everything shares the same internal network?

No special firewall rules required. For example vif11.0 cannot
communicate with vif15.0 by default.

It's a standard Qubes ProxyVM feature. It creates a new separate virtual internal network per VM, like vif11.0, vif15.0, etc.

I see. Does Whonix's ability to apply GW firewall rules (and a new control port) on the new interface rely on a Qubes feature? Does it depend on virtual interface names like vif11.0?


Offtopic: I summarized reasons for and against using clean-filter

https://www.whonix.org/wiki/Dev/KVM#Apply_Clean-filter_traffic

HulaHoop added a comment.

It's a standard Qubes ProxyVM feature. It creates a new separate virtual internal network per VM, like vif11.0, vif15.0, etc.

I see. Does Whonix's ability to apply GW firewall rules (and a new control port) on the new interface rely on a Qubes feature? Does it depend on virtual interface names like vif11.0?

Has nothing to do with Tor's ControlPort.

The anti-spoofing feature is indeed a Qubes feature. It is implemented by vif interfaces. That Qubes vif interface feature relies on Xen features.

Wait, I thought the ability for the Whonix GW to process traffic from a second internal interface besides eth1 (eth2?) depended on changes to iptables (not implemented yet). Is that correct?

HulaHoop added a comment.

Wait, I thought the ability for the Whonix GW to process traffic from a second internal interface besides eth1 (eth2?) depended on changes to iptables (not implemented yet). Is that correct?

In the case of Qubes, rather than writing vif... multiple times, iptables also understands vif+, which applies to all vif interfaces.

Support for multiple internal (and even external) interfaces is already implemented in Whonix stable.

# excerpt from the firewall script: rules are applied per configured interface
for int_if_item in $INT_IF; do
for int_tif_item in $INT_TIF; do
for ext_if_item in $EXT_IF; do

etc.

Example: /etc/whonix_firewall.d/50_user.conf

INT_IF="eth1 eth2 eth3"
INT_TIF="$INT_IF"

I tried testing this by creating a second internal network connected to the GW. It does not seem to work, though I'm sure with a bit of troubleshooting it will get there.

Some points:

  • It's 50_user with no .conf; adding the ending creates a second file.
  • I am not sure if it is needed, but under eth1 I uncommented INT_IP and INT_IF in /etc/network/interfaces.d/30_non-qubes-whonix, with no effect. ifconfig shows eth2 is only assigned an IPv6 address. Is this supposed to happen?
  • I added an eth2 entry with a GW subnet section under eth1 in interfaces.d (a sketch of such a stanza follows below). ifconfig shows an IPv4 address as expected, but no connection can be made from the cloned WS.
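
For reference, a sketch of the kind of eth2 stanza meant above (the 10.153.152.0/24 subnet is a hypothetical choice, picked to be distinct from the default internal network around 10.152.152.10):

auto eth2
iface eth2 inet static
    address 10.153.152.10
    netmask 255.255.255.0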

HulaHoop added a comment.

I tried testing this by creating a second internal network connected to the GW. It does not seem to work, though I'm sure with a bit of troubleshooting it will get there.

Some points:

- It's 50_user with no .conf; adding the ending creates a second file.

That certainly must be a mistake. Only configuration files ending with .conf are sourced since Whonix 12.

Check the xtrace of sudo bash -x whonix_firewall until the sourcing is done, i.e. up to line 44.

Also check the full xtrace and see if the changed settings actually result in multiple network interfaces being processed. Or check the firewall rules diff as per:

https://www.whonix.org/wiki/Dev/Firewall_Refactoring
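
A generic way to take that diff (a sketch; the linked page describes the project's own procedure):

sudo iptables-save > /tmp/rules.before
# adjust /etc/whonix_firewall.d/50_user.conf, then reload:
sudo whonix_firewall
sudo iptables-save > /tmp/rules.after
diff /tmp/rules.before /tmp/rules.after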

  • I am not sure if needed, but under eth1 I uncommented INT_IP and INT_IF in /etc/network/interfaces.d/30_non-qubes-whonix, with no effect.

That makes no sense. INT_IP and INT_IF are not valid ifupdown keywords. That breaks networking.

comment:

# comment

commented out command:

#command

ifconfig shows eth2 is only assigned an IPv6 address. Is this supposed to happen?

Probably not.

  • I added an eth2 entry with a GW subnet section under eth1 in interfaces.d. ifconfig shows an IPv4 address as expected, but no connection can be made from the cloned WS.

I'd replicate this setup using plain Debian first: multiple ws's connected to one gw using multiple internal network interfaces. The ws's should not be able to reach clearnet [no IP forwarding in Linux by default], but you should be able to ping the gateway from the ws, since no Whonix stuff (no firewall) interferes.
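
A sketch of the checks implied here (the gateway-internal IP is illustrative):

# on the gateway: Linux does not forward IPv4 by default
cat /proc/sys/net/ipv4/ip_forward    # prints 0 while forwarding is disabled
# from a workstation: the gateway itself should still answer
ping -c 3 10.152.152.10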

The firewall works as expected according to xtrace. So we can conclude it's something else. Probably too difficult to find out; not worth the effort when an extra GW solves the problem and runs with very little resources.

Not worth the effort when an extra GW solves the problem and runs with very little resources.

Extra gw comes with disadvantages.

  • More Tor entry guards used,
  • or more hassle setting up bridges.
  • Plus another system to keep updated.
  • Eating up more disk space.

(Btw, the latter two points are also better solved in Qubes due to root image sharing, so extra gws or extra wss need much less disk space.)

Any ideas for what else I can try?

I'd replicate this setup using plain Debian first: multiple ws's connected to one gw using multiple internal network interfaces. The ws's should not be able to reach clearnet [no IP forwarding in Linux by default], but you should be able to ping the gateway from the ws, since no Whonix stuff (no firewall) interferes.

Vanilla Debian 8, same result. After reading about similar setups, I think more is needed to get this working:

https://serverfault.com/questions/705919/the-same-ip-on-multiple-interfaces

I thought of something else. Using a single Gateway can link the activities of two different Workstations even if they are on separate internal networks, because the malicious WS can send NEWNYMs in some pattern, which causes the traffic coming from the other, clean WS to change exits at will. So while the identity of the user is not unmasked, the entire purpose of the setup (multiple unlinked identities) is defeated.

Interesting thought!

  • Long-lived connections are not affected; they never change the circuit as long as they are active.
  • Streams are isolated [more reliably if IP spoofing is prevented].
  • I wonder how non-long-lived, new connections would be trackable by malicious newnym patterns?
  • Let's say, for the sake of thinking this through, some application does a simple fetch of check.tpo, closes the connection, and repeats every minute.
  • What check.tpo would be seeing is not the same exit fetching check.tpo every minute and then an exit IP change, but an exit IP change every 2, 4 or 8 minutes or so.

Also, I wonder how newnym spam looks to ISP-level adversaries?

Not sure which one is better. Multiple Whonix-Gateways result in multiple instances of Tor running at the same time. This will certainly be noticeable for ISP-level adversaries. [Leaving hiding Tor aside.] And one would end up using multiple Tor entry guards at once. (It is unlikely that Tor picks the same one.) Multiple Tor entry guards are something that TPO has eliminated, both to avoid aiding the tracking of Tor users who change locations (assigning a pseudonym to them based on their fairly unique choice of multiple Tor entry guards) and because they decided that multiple Tor entry guards pose a higher risk.

Other than newnym, a compromised workstation could also induce a pattern by causing a lot of traffic followed by very little traffic. That might eat up the connection speed (and latency?) of the Tor connection from the non-compromised workstation. On the other hand, this could also happen when using multiple Whonix-Gateways. It's not hard to make one Tor instance cause loads of outgoing traffic (thereby influencing another Tor instance).

I wonder how non-long-lived, new connections would be trackable by malicious newnym patterns?

Some first-party tracking site "X" sets a cookie. A newnym from an unrelated WS causes circuit changes for all short-lived circuits. A script running in the browser re-fetches data and sends the cookie ID from a new exit IP. With Tor's current design of unchanging exit IPs per domain, with many requests it becomes clear that the changes are caused by the adversary's behavior, especially if the site in question is obscure.

Also, I wonder how newnym spam looks to ISP-level adversaries?

With an entry guard this shouldn't pose a risk. Tor cells are padded to a constant size, though the netflows still are not; that will change soon, in the 0.3.0 release, which should defeat DNS deanonymization and website fingerprinting. While the former protects against traffic-size fingerprinting by network adversaries outside Tor, the latter will protect against them if they are running relays.

Not sure which one is better. Multiple Whonix-Gateways result in multiple instances of Tor running at the same time. This will certainly be noticeable for ISP-level adversaries.

It's not that unique a setup. Think about Debian systems that have TBB and apt-transport-tor installed. Location tracking will take manual effort to mitigate in general, because besides clearing the Tor guard data one needs disposable wifi sticks.

Other than newnym, a compromised workstation could also induce a pattern by causing a lot of traffic followed by very little traffic. That might eat up the connection speed (and latency?) of the Tor connection from the non-compromised workstation. On the other hand, this could also happen when using multiple Whonix-Gateways. It's not hard to make one Tor instance cause loads of outgoing traffic (thereby influencing another Tor instance).

This sounds like a variant of the CPU-induced network latency attack and should be mitigated by the queuing fix. It likely holds for both situations and is dangerous enough to completely deanonymize a rooted WS user, rather than just linking activities.


What I wrote is based on my understanding of how Tor works. I think this would make an interesting question for the Tor devs.

"Do you recommend allowing Workstation VMs of different security levels to communicate with the same instance of Tor (even if they do so via separate isolated networks)?"

Yes, that seems useful. Please also add the pros and cons we discussed
above.

Got an answer: https://lists.torproject.org/pipermail/tor-dev/2016-October/011591.html

It seems that a shared Gateway is a very bad idea. There are caveats about separate gateways too, but nothing as serious:

  • Links traffic at different guards to the same source IP address

A network adversary can see connections to two different guards.

  • Even VM-level isolation is not proof against some attacks

This is true, as seen in many advanced attacks involving covert channels and side channels.


The last important thing I learned, which we should document for opsec, is: gateways cache DNS entries and descriptors of HS websites visited, which can give away that they were visited before (because of a faster site-loading response), even if a single WS is used and is rolled back to a clean snapshot. The solution is to always roll back the gateway to a clean state between different WS sessions.

Could you please add all of these pros and cons to that wiki page? Or
some /Dev page if too irrelevant for users?

Perhaps we just make the recommendation to them and keep the details on
some /Dev page for a shorter page / better usability?

gateways cache DNS entries and descriptors of HS websites visited, which can give away that they were visited before (because of a faster site-loading response), even if a single WS is used and is rolled back to a clean snapshot.

Does this apply even when Tor is doing proper stream isolation?

Because the repercussions of this would contradict why stream isolation was implemented in Tor in the first place. If so, should we create a Tor bug report suggesting that Tor should only cache DNS / HS descriptors per stream, not globally?

The solution is to always roll back the gateway to a clean state between different WS sessions.

Really bad from a usability point of view.

Could you please add all of these pros and cons to that wiki page? Or some /Dev page if too irrelevant for users?

Sure. I will finalize everything once you discuss the consequences.

Perhaps we just make the recommendation to them and keep the details on some /Dev page for a shorter page / better usability?

I think it's best if we include it on the user page, but below the short and clear recommendation, so users who care to learn more can access the info in one place.

Does this apply even when Tor is doing proper stream isolation?

I believe so. He is describing what Tor normally does, AFAICT.

Because the repercussions of this would contradict why stream isolation was implemented in Tor in the first place. If so, should we create a Tor bug report suggesting that Tor should only cache DNS / HS descriptors per stream, not globally?

+1 but you should discuss this on the ML first to see what they have to say.

Really bad from a usability point of view.

It's a big PITA for usability, but can you see another way?

I would prefer if you post these questions on tor developer ML since you understand the topic better and it is an important thing we need to know.

I would prefer if you post these questions on tor developer ML since you understand the topic better and it is an important thing we need to know.

I am not sure they will be happy with that. A while ago I got negative feedback when I did that. Perhaps times have changed.

What about creating a ticket instead? The issue with Tor mailing list discussions is that often no tickets are created for the issues found, hence the issues are never fixed or are unnecessarily delayed. A ticket persists, may get a milestone, may get postponed a few times, but eventually gets fixed.

HulaHoop added a comment.

Another reply:

https://lists.torproject.org/pipermail/tor-dev/2016-October/011613.html

Most of that is related to Tor's ControlPort. We forgot to add in the original question that access to the Tor ControlPort is whitelisted and supports only very few commands.

Anyhow, perhaps a good source to "super blacklist" a few Tor control commands. I mean, adding them to the control port filter wiki page with an explanation of why these should never be whitelisted.

draft:

stream isolation for DNS and hidden service descriptor cache


Seems like Tor's DNS cache ({{{CacheIPv4DNS}}}, {{{CacheIPv6DNS}}}) and its cache of hidden service descriptors are global.

The first connection, in stream one, resolves all DNS names or hidden service descriptors, but follow-up connections in separate streams to the same website do not resolve again and instead use Tor's cache.

So webservers could provide a slightly unique version of their website per visitor. Each visitor's browser could be instructed to load additional content from varying hostnames. Due to caching vs non-caching, it might be possible to make visitors pseudonymous rather than anonymous.

The problem is that Tor's cache is global and not stream isolated.
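
For context, a torrc sketch of the per-SocksPort stream isolation referred to above (IP and ports are illustrative); the DNS and descriptor caches under discussion are shared across all such ports:

# e.g. a dedicated port for the browser
SocksPort 10.152.152.10:9100 IsolateDestAddr IsolateDestPort
# e.g. a dedicated port for IRC
SocksPort 10.152.152.10:9101 IsolateDestAddr IsolateDestPort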

What do you think?

Good write-up

Most of that is related to Tor's ControlPort. We forgot to add in the original question that access to the Tor ControlPort is whitelisted and supports only very few commands.

True, but teor's reply discusses things that do not depend on ControlPort access.

Anyhow, perhaps a good source to "super blacklist" a few Tor control commands. I mean, adding them to the control port filter wiki page with an explanation of why these should never be whitelisted.

Maybe, but there could be a lot of functionality that can be abused that we can't think of, because we don't know every bit of Tor's internals.

Nice. The Tor guys took notice.

On topic: while DNS and HS descriptor caching is one of the issues, the original reply implies there are still many other problems: some known, and possibly some unknown unknowns. To be absolutely safe we should still recommend the multi-GW setup with GW snapshots as a catch-all. Inconvenient? Yes, but better safe than sorry.

  • Caching of DNS, HS descriptors, preemptive circuits, etc.
  • VMs can leak other VM's guards and even entire circuits
    • easily without a control port filter
    • perhaps some discovery attacks even with a filter



Not too long ago Tor got some improvements:

  • The number of Tor entry guards was reduced from 3 to 1.
  • And guards are kept for longer. (Longer guard cycling period.)

If we encourage multiple Whonix-Gateways, we subvert these improvements.

a) I think guard-related attacks are much more serious, since they lead to deanonymization.

b) Caching issues in multi-ws single-gw setups can lead to linking multiple workstations together.

I think a) is more serious than b). So for now I guess multi-ws single-gw is still better than multi-gw multi-ws.

For multi-gw setups it might theoretically be an option to have them manually use the same Tor entry guard? That also does not seem too great, since Tor guard selection would then be manual.


Asked an additional somewhat related question:

https://lists.torproject.org/pipermail/tor-dev/2016-November/011633.html


Perhaps we could equate the following two?

single-gw multi-ws

  • perhaps some discovery attacks even with a filter

multi-gw multi-ws

  • Even VM-level isolation is not proof against some attacks

If these can be equated, then only the caching issues remain. And these can be considered bugs that can one day be fixed in Tor.

For multi-gw setups it might theoretically be an option to have them manually use the same Tor entry guard?

The solution is much simpler than you imagine. A user would simply clone the original GW VM after it has started and chosen its guard, so they have the same one. Now this must be repeated before the guard time is up, which is 3 months I think, to make sure the same guard stays in use.

It's not airtight, because if a guard disappears each VM may choose a different alternative (unless the choice is deterministic), in which case both could be expected to pick the same one from an ordered list generated at first run. Worth asking about.

One mild disadvantage of such a setup is that it would be clear from the network fingerprint that two connections to the same guard are a sign of Whonix use. Now a solution to this is to ask upstream to modify TBB's Tor to use a system Tor's guard if present, to minimize the risks of the multiple guards we know of; this also increases the anonymity set for the Whonix multi-GW situation above.

HulaHoop added a comment.

For multi-gw setups it might theoretically be an option to have them manually use the same Tor entry guard?

The solution is much simpler than you imagine. A user would simply clone the original GW VM after it has started and chosen its guard, so they have the same one.

If anything, it might be better to just copy over /var/lib/tor/state.
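
A sketch of that, assuming Debian default paths and a hypothetical transfer location; Tor must be stopped on both gateways while the file is copied:

# on the first gateway
sudo systemctl stop tor
sudo cp /var/lib/tor/state /mnt/transfer/state
# on the second gateway
sudo systemctl stop tor
sudo cp /mnt/transfer/state /var/lib/tor/state
sudo chown debian-tor:debian-tor /var/lib/tor/state
sudo systemctl start tor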

Cloning may not be a great idea in the context of Non-Qubes-Whonix. Cloning implies that the VM has been previously started. That is not great, because then lots of files are generated that should not be shared, such as random seeds.

That is why Whonix is built inside a chroot. No services are ever started before the images are deployed.

For multiple gateways or workstations it is better to import a fresh image rather than trying to have a master image.

TorBOX 0.1.3 had a critical vulnerability of sharing an already populated /var/lib/tor folder known to the public (because of the deployed image), because back then we created VM images inside VMs, by running them and running commands inside. (As said, for a very long time now, images are built inside a chroot and services are never started.)

Installing software / services inside a master image and running them there is problematic. Imagine sshd: upon installation, it generates its keys. Once one has done that, one should not use that image as a master image.

Imagine a private key that Tor creates that should not be shared. Practically: a user is happy with one gateway and sets up a hidden service. Later the user discovers that multi-gw is a thing now and clones that gateway. That would contaminate the different trust levels.

Now this must be repeated before the guard time is up, which is 3 months I think, to make sure the same guard is in use. It's not airtight, because if a guard disappears each VM may choose a different alternative (unless the choice is deterministic), in which case both could be expected to pick the same one from an ordered list generated at first run.

Yes, not a reliable solution. Specifically for users cloning the gateway after almost 3 months.

Worth asking about.

That would be nice. There may already be a long-standing ticket for this. It's called guard seed. In that discussion a password/PIN was suggested in order to get more stable Tor entry guards.

https://trac.torproject.org/projects/tor/ticket/5236

Once Tails implements persistent Tor state

https://tails.boum.org/blueprint/persistent_Tor_state/

the need for that feature may be lower. Then the only users (known to Tails and TPO) in need of that feature would be Tails users who do not use persistence.

One mild disadvantage of such a setup is that it would be clear from the network fingerprint that two connections to the same guard are a sign of Whonix use.

Now a solution to this is to ask upstream to modify TBB's Tor to use a system Tor's guard if present, to minimize the risks of the multiple guards we know of; this also increases the anonymity set for the Whonix multi-GW situation above.

That's unlikely to happen, since TPO maintains TBB as a portable application. The system Tor version may not match the TBB Tor version. TPO supports many platforms, while Whonix is a derivative of Debian that always ships the latest Tor version.

What is also unlikely, but perhaps more likely, is that TBB will be properly packaged for Debian one day. Then indeed it would use system Tor.

https://trac.torproject.org/projects/tor/ticket/3994
https://trac.torproject.org/projects/tor/ticket/5236

I will post the new usage advice on the KVM page because some of it applies to a simple setup.

As for the multi-gw page, I think we should post the raw facts from teor's post, which discourage a shared setup and note caveats for regular use. The fact that it impacts usability needs another ticket to track upstream features or Whonix platform changes needed to improve that. Otherwise this ticket can be considered complete.

Before rushing such a major usability-decreasing change, I want to make sure it is really well justified and we are not chasing a ghost.

multi-ws behind single-gw has issues, but I don't think we have exhausted all communication with upstream yet, or that they advised against it. We haven't stressed what is worse:

  • a) single gateway sharing, vs
  • b) using multi-gw and thereby subverting the number and duration of entry guard selection

(In that question we must make sure to add that we do use control port filtering.)

For now we have gathered a great deal of new information, but I don't think they advised the Whonix project to move from single-gw to multi-gw in all cases.

Also, another point that makes me uneasy...

context was:

multi-gw multi-ws

teor wrote:

Even VM-level isolation is not proof against some attacks

Let's write a draft for that question.


There are three options for the user recommendation:

  1. multi-gw multi-ws
  2. single-gw multi-ws
  3. single-gw with multi-ws, while recommending against running different activities of different trust levels / pseudonyms at the same time

Yet another option might be to run multiple Tor instances on the same Whonix-Gateway. Tor's systemd service supports that. (Each instance would use a separate Tor data dir, and thereby a separate Tor entry guard.)
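
A sketch using Debian's tor instance support (the instance name ws2 is illustrative):

# creates /etc/tor/instances/ws2/torrc plus a separate data directory
sudo tor-instance-create ws2
# the instance runs as its own systemd unit and picks its own entry guard
sudo systemctl start tor@ws2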

Before rushing such a major usability-decreasing change, I want to make sure it is really well justified and we are not chasing a ghost.

Sure.

multi-ws behind single-gw has issues, but I don't think we have exhausted all communication with upstream yet, or that they advised against it. We haven't stressed what is worse.

OK. I'll think of something, but it's better if you post so he doesn't get impatient.

  2. single-gw multi-ws
  3. single-gw with multi-ws while recommending against running different activities of different trust levels / pseudonyms at the same time

These two can be collapsed into scenario 3. I thought we already advised that on the wiki, since it has stronger guarantees for activity isolation. I think you've written a good short description that we should use in the draft.

Even VM-level isolation is not proof against some attacks

I am certain he means the virtualization covert-channel and side-channel attacks that were published recently. Stuff we already know about and have documented in the Advanced Attacks chapter.

Yet another option might be to run multiple Tor instances on the same Whonix-Gateway. Tor's systemd service supports that. (Each instance would use a separate Tor data dir, and thereby a separate Tor entry guard.)

Very cool. I'm thinking this is yet a different setup that we should also ask about. So is it used in a multi-ws context, or single too?

(Without TCP timestamps, network monitoring cannot tell whether multiple Tor connections come from different machines behind NAT or from the same one.)
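
For reference, the standard Linux knob for that (a sketch; as far as I know Whonix ships such a setting by default):

# check and disable TCP timestamps at runtime
sysctl net.ipv4.tcp_timestamps
sudo sysctl -w net.ipv4.tcp_timestamps=0
# to persist, put net.ipv4.tcp_timestamps=0 into a file under /etc/sysctl.d/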

OK. I'll think of something, but it's better if you post so he doesn't get impatient.

Sure. I can post. But first we need the draft. And before that I am re-reading this whole ticket, and then adding everything we learned here, which is a lot, here:

https://www.whonix.org/wiki/Dev/Multiple_Whonix-Workstations

Even VM-level isolation is not proof against some attacks

I am certain he means the virtualization covert-channel and side-channel attacks that were published recently. Stuff we already know about and have documented in the Advanced Attacks chapter.

Possibly. And then there are the old side-channel attacks. Hypervisors will probably not defeat them anytime soon.

Yet another option might be to run multiple Tor instances on the same Whonix-Gateway. Tor's systemd service supports that. (Each instance would use a separate Tor data dir, and thereby a separate Tor entry guard.)

Very cool. I'm thinking this is yet a different setup that we should also ask about.

Yes.

So is it used in a multi-ws context, or single too?

I had multi-ws in mind, but multiple Tor instances per single ws might also make sense. It's related, but solving a different problem.

Good in principle, however I want to avoid confusion.

multi-gw multi-ws -> single-g/ws 1:1

single-gw with multi-ws while recommending against running multiple VMs for activities of different trust levels / pseudonyms at the same time

Is this the setup I describe, where a single ws is rolled back between different activities? If yes, then this should be better described.

IMHO it's better not to rehash what was sent before at the beginning of the message, but instead simply jump to the three other setups, to avoid losing their attention. The concern you have can best be compressed into this sentence:

"Given all the disadvantages of multi-ws sharing a single gw, is it worse than having multiple entry guards into Tor if we make use of separate gateways?"

Also no need to link to the draft page.

Good in principle, however I want to avoid confusion.

multi-gw multi-ws -> single-g/ws 1:1

single-gw with multi-ws while recommending against running multiple VMs for activities of different trust levels / pseudonyms at the same time

Is this the setup I describe, where a single ws is rolled back between different activities? If yes, then this should be better described.

No rollback for the ws was considered.

IMHO it's better not to rehash what was sent before at the beginning of the message, but instead simply jump to the three other setups, to avoid losing their attention. The concern you have can best be compressed into this sentence:

"Given all the disadvantages of multi-ws sharing a single gw, is it worse than having multiple entry guards into Tor if we make use of separate gateways?"

What about the compression inside the TLDR section? Can you write it, please?

Is https://www.whonix.org/w/index.php?title=Dev%2FMultiple_Whonix-Workstations&type=revision&diff=26455&oldid=26415 in progress or done?

(Intermediate edits are very much fine. Just need a notification here once ready. Take your time.)

Done. Feel free to discuss it further before posting if needed.

I think we should post the long draft. I am not convinced they are offended by longer mails / complexity. That wiki page shows that we considered each of their sentences and made an effort to reflect on it before asking more questions.

Their past replies indicate they didn't assume that Whonix has control port filtering. So stating upfront that we do will change the replies we are going to receive.

And if we change to multiple-gw multiple-ws mapped 1:1, it will be second-guessed a lot in the future. Then it's good to have a reference by Tor developers where all of this was discussed in its full complexity.

Any more draft suggestions?

multi-gw multi-ws -> single-g/ws 1:1

Sorted out. Please check.

Ready to post?

Talked to Roger Dingledine of Tor in person at the 33C3 CCC conference. The short summary is that Roger also doesn't know what the advisable trade-off for our [single-gw multiple-ws vs multiple-gw multiple-ws mapped 1:1](https://lists.torproject.org/pipermail/tor-dev/2016-December/011720.html) question is.

Roger wants to connect us to Tariq, the researcher who improved the guard parameters. Tariq could possibly help write a research-wanted blog post that Roger suggested having posted on blog.torproject.org.

We can use existing texts from the wiki and mailing lists. And since the post targets researchers, it can be as verbose as this topic really is.

Roger will reply to our mail next year. And if he doesn't, we should mail him again.

Patrick changed the task status from Open to Review.Feb 13 2017, 6:42 PM

Roger mailed Tariq. Now waiting for Tariq's reply.

Not sure about the ticket status. It's actually "waiting for reply".

Patrick renamed this task from Multi GW Documentation to resarch: Single Tor-Gateway with Multiple Workstations vs Multiple Tor-Gateways mapped 1:1 to Workstation VMs.Mar 26 2017, 2:53 PM
Patrick updated the task description. (Show Details)
Patrick added a project: research.
Patrick changed the task status from Review to Open.Mar 26 2017, 3:01 PM

Talked to Tariq.

TODO: write an introduction to Whonix for the research-wanted blog post. Then send that to Tariq, who will work through https://www.whonix.org/wiki/Dev/Multiple_Whonix-Workstations. Deadline: 06.04.2017.

My notes...

  • Note, we're addressing people who have never heard of Whonix. "Workstation" may mean to them the whole computer they are using. Therefore these terms need to be defined.
  • What is Whonix? Whonix is an integration project for VMs, Debian and Tor.
  • Whonix allows running any application over Tor without leaks and comes with other enhancements as well.
  • we wonder whether it is better to use one gateway or multiple gateways
  • What are Whonix's aims in terms of anonymity?
  • What are the properties that Whonix hopes to provide?
  • compartmentalization
  • definition of gateway / workstation
  • assumption: no Tor ControlPort access
  • assumption: leaving side-channel attacks against VMs out (a different research and problem-solving area)
  • what happens when one VM is malicious: how can it affect the anonymity of the other VM?
  • Note that everything happens on the client's local machine, i.e. there is no gateway on remote computers.
  • the adversary can only see connections to the external network
  • exclude internal LAN IP address analysis (not worth it; it is easy to detect whether something is running in Qubes, VirtualBox or KVM)
  • assumption: the gateway is not maliciously altered
  • assumption: Tor cannot be exploited (auditing and hardening Tor is a different research area; it is not useful to mix it in here)
  • the IP discovery ticket is quite good: https://trac.torproject.org/projects/tor/ticket/5928
  • define "IP leak"
  • mention that the workstation is fully torified; a question to ask could be: can I find out my IP?
  • links to Whonix and Qubes-Whonix so people can easily install and try
Patrick renamed this task from resarch: Single Tor-Gateway with Multiple Workstations vs Multiple Tor-Gateways mapped 1:1 to Workstation VMs to research: Single Tor-Gateway with Multiple Workstations vs Multiple Tor-Gateways mapped 1:1 to Workstation VMs.Mar 26 2017, 5:08 PM
Patrick changed the task status from Open to Review.Apr 25 2017, 6:03 PM

Finally got back to Tariq.

Patrick changed the task status from Review to Open.Mar 7 2018, 1:03 AM
Patrick edited projects, added Whonix 15; removed Whonix 14.

Could you please ping Tariq about the status of this? @HulaHoop

Reply:

The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only the adversary that runs guards, and not a local adversary such as the host OS or the Whonix processes themselves.

In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number, since we don't know the real number, but it serves to show the trends as we increase the guards per client and the number of clients per user. I do the kind of analysis we do in the Conflux [1] paper, which is very relevant here, especially Table 3 and its discussion in section 5.2. I update the numbers and extend that analysis for the scenarios you have described.

  1. 1 guard/client, 1 client/user.

The adversary (i.e. the compromised guard) will have the ability to observe 10% of the clients and hence 10% of users. This is the situation today.

  2. 2 guards/client, 1 client/user.

This is worse than 1 above. There is now an 18% probability that only one of the guards is compromised per client and a 1% chance that two guards are compromised per client. The probability of at least one bad guard is hence 19%. There really is no distinction between one or two bad guards from the user's perspective, since in both situations the client will go through a malicious guard within a short period of time, as the guard is picked uniformly at random from the guard set.
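
The 18% / 1% / 19% figures follow from treating the two guard picks as independent 10% risks:

P(exactly one bad guard)  = 2 * 0.1 * 0.9 = 0.18
P(both guards bad)        = 0.1 * 0.1     = 0.01
P(at least one bad guard) = 1 - 0.9^2     = 0.19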

  3. 1 guard/client, 2 clients/user.

The observable clients again increase to 19% from the base 10% in 1 above. This means that if the user splits her apps (or groups of apps) across the clients, then there is a 19% chance that at least one of the app groups is compromised. However, for each client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.

  4. 2 guards/client, 2 clients/user.

The observable clients increase to 54%. This means that there is a 54% chance that at least one bad guard is present. This is worse than all the other scenarios above. However, if we fix apps (or groups of apps) to particular clients, then we can compare to scenario 2, where the app group/client mapping is analogous and the same analysis holds. Then, for each client there is again a 19% chance that a malicious guard is present. If we compare to 3 above, we can see that if we only use 1 guard/client, then we can drop the exposure back down to 10% for that client and hence for that app group.

Taking the above into account, we can get good results by keeping the guard set size at 1 and having users spin up one client for each app. Then we can achieve at most 10% of apps compromised at *any given time*, but not simultaneously. We can call this scenario (which is an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.

If we want to consider 1G/A, then the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A) or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to bend towards option 2, but then they have not considered the option of multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike the Whonix situation. I believe that option 2 is flawed, because you never know whether you are in fact currently compromised or not. It might be better to go ahead with assuming that you are compromised and to mitigate that compromise to some portion of your network activity, rather than all or nothing, which is what option 1 provides.

I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.

Best regards,
Tariq Elahi

[1] http://cacr.uwaterloo.ca/techreports/2013/cacr2013-16.pdf

Appendix
We can do better if we allow a user's clients to look at each other's guard lists to exclude guards that are already picked. The benefit would be that once the bad bandwidth has been assigned, it can no longer affect subsequent guard selections. However, clients looking at each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might work to avoid this problem, and indeed the adversarial response would be weak, since the best they can do is spread their bad bandwidth over many relays and at best return to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.

Proposed implementations for multi-Tor are suggested here:

http://forums.dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/t/cooperating-multi-tor-implementation-to-avoid-guard-duplication/6126

@Patrick Should I open a separate ticket for whichever implementation you end up choosing, or is this ticket sufficient?

The concept was documented for operational use. Automatic guard de-duplication was considered too complex to deploy, and manual checking is enough.