Fri, Dec 7
Mon, Dec 3
I think hiding the clock is a bad idea, as a user may want to manually run sdwdate to adjust it if it's out of whack before initiating internet traffic. (This applies to non-Qubes versions, which lack automatic time adjustment.)
Tue, Nov 20
Oct 13 2018
Proposed implementations for multi-Tor are suggested here:
The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only an adversary that runs guards, not a local adversary such as the host OS or the Whonix processes themselves.
In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number, since we don't know the real figure, but it serves to show the trends as we increase the number of guards per client and the number of clients per user. I do the kind of analysis we did in the Conflux paper, which is very relevant here, especially Table 3 and its discussion in Section 5.2. I update those numbers and extend the analysis to the scenarios you have described.
- 1 guard/client, 1 client/user. The adversary (i.e., the compromised guard) will be able to observe 10% of clients and hence 10% of users. This is the situation today.
- 2 guards/client, 1 client/user. This is worse than scenario 1 above. There is now an 18% probability that exactly one of the guards is compromised per client and a 1% chance that both guards are compromised, so the probability of at least one bad guard is 19%. From the user's perspective there is no real distinction between one or two bad guards, since in both situations the client will route through a malicious guard within a short period of time, because the guard is picked uniformly at random from the guard set.
- 1 guard/client, 2 clients/user. The observable clients again increase to 19% from the base 10% in scenario 1 above. This means that if the user splits her apps (or groups of apps) across the clients, there is a 19% chance that at least one app (group) is compromised. However, for each individual client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.
- 2 guards/client, 2 clients/user. The observable clients increase to 34% (1 - 0.9^4). This means there is a 34% chance that at least one bad guard is present, which is worse than all the other scenarios above. However, if we pin apps (or groups of apps) to particular clients, then we can compare to scenario 2, where the app group/client is analogous and the same analysis holds: for each client there is again a 19% chance that a malicious guard is present. Comparing to scenario 3, we see that using only 1 guard/client drops the exposure back down to 10% for that client and hence for that app group. (See the probability sketch after this list.)
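For anyone who wants to reproduce the numbers, here is a minimal sketch of the arithmetic behind the four scenarios, assuming the 10% adversarial guard bandwidth and independent uniform guard selection described above (the variable names are mine):

```python
# Probability that at least one of a user's guards is compromised,
# assuming each guard is an independent uniform pick and the adversary
# controls a fraction f of guard bandwidth.
f = 0.10  # hypothetical adversarial guard bandwidth

def p_at_least_one_bad(guards_per_client, clients_per_user):
    return 1 - (1 - f) ** (guards_per_client * clients_per_user)

for g, c in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    print(f"{g} guard(s)/client, {c} client(s)/user: "
          f"{p_at_least_one_bad(g, c):.0%}")

# 1 guard(s)/client, 1 client(s)/user: 10%
# 2 guard(s)/client, 1 client(s)/user: 19%
# 1 guard(s)/client, 2 client(s)/user: 19%
# 2 guard(s)/client, 2 client(s)/user: 34%
```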
Taking the above into account, we can get good results by keeping the guard set size at 1 and having users spin up one client per app. Then at most 10% of apps are compromised at *any given time*, but never all of them simultaneously. We can call this scenario (an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.
If we want to consider 1G/A, then the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A) or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to lean towards option 2, but they have not considered multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike in Whonix. I believe option 2 is flawed because you never know whether you are in fact currently compromised. It may be better to assume that you are compromised and mitigate that compromise down to some portion of your network activity, rather than all or nothing, which is what option 1 provides.
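To make that trade-off concrete, here is a toy comparison under the same 10% assumption; the app count is illustrative, not from the analysis above:

```python
# Both options expose ~10% of app traffic on average; they differ in
# how that exposure is distributed across apps.
f, n_apps = 0.10, 10   # 10% bad guard bandwidth, hypothetical 10 apps

# Option 1 (1G/A): one client and one independently picked guard per app.
expected_bad_apps = f * n_apps   # ~1 app compromised at any given time
p_all_bad_1ga = f ** n_apps      # all apps at once: 1e-10, negligible

# Option 2 (a single client for everything): all-or-nothing.
p_all_bad_single = f             # 10% of the time, every app is exposed

print(expected_bad_apps, p_all_bad_1ga, p_all_bad_single)
```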
I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.
Best regards, Tariq Elahi
Appendix

We can do better if we allow a user's clients to look at each other's guard lists and exclude guards that have already been picked. The benefit is that once the bad bandwidth has been assigned, it can no longer affect subsequent guard selections. However, clients looking into each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might avoid this problem, and the adversarial response to it would be weak: the best the adversary can do is spread their bad bandwidth over many relays and at best return to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.
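A rough Monte Carlo sketch of the exclusion idea, using a toy network of my own construction (90 honest bandwidth units plus 10 adversarial, either concentrated in one big relay or spread over ten small ones):

```python
import random

# Toy guard network: 100 units of bandwidth, 10 of them adversarial.
# spread=False: one big bad relay; spread=True: ten small bad relays.
def make_network(spread):
    bad = [("bad", 1)] * 10 if spread else [("bad", 10)]
    return [("honest", 1)] * 90 + bad

def compromised_fraction(network, n_clients=4, exclude=False, trials=20_000):
    hits = 0
    for _ in range(trials):
        pool = list(network)
        for _ in range(n_clients):
            i = random.choices(range(len(pool)),
                               weights=[w for _, w in pool])[0]
            if pool[i][0] == "bad":
                hits += 1
            if exclude:
                pool.pop(i)  # later clients skip this guard entirely
    return hits / (trials * n_clients)

for spread in (False, True):
    for exclude in (False, True):
        frac = compromised_fraction(make_network(spread), exclude=exclude)
        print(f"spread={spread} exclude={exclude}: {frac:.3f}")

# Without exclusion, ~0.10 of guard slots are bad in every case. With
# exclusion, a concentrated adversary drops below 0.10 (its relay can be
# picked at most once per user); spreading the bandwidth over many relays
# only climbs back to ~0.10, at the cost of running many more relays.
```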
Sep 20 2018
Sep 18 2018
Actually, the "apt-daily.timer: Adding 1h 17min 24.927437s random time" message has a real impact, not just noise. Each time sdwdate changes the time, systemd adds a random delay to those timers, which means the timer will never expire (unless the random delay happens to be very close to 0, i.e. below the time until sdwdate changes the time again, which looks to be about 1s).
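A minimal sketch of why the timer then effectively never fires, with illustrative numbers (a hypothetical 2h random-delay window and the ~1s adjustment interval observed above):

```python
import random

# Each sdwdate time change makes systemd recompute the timer's next
# elapse with a fresh random delay. If that delay is almost always
# longer than the interval between time changes, the timer never
# reaches its elapse point.
ADJUST_INTERVAL = 1.0          # seconds between sdwdate changes (~1s observed)
MAX_RANDOM_DELAY = 2 * 3600.0  # hypothetical random-delay upper bound

trials, fired = 100_000, 0
for _ in range(trials):
    delay = random.uniform(0, MAX_RANDOM_DELAY)
    if delay < ADJUST_INTERVAL:   # timer elapses before the next reset
        fired += 1
print(f"timer fires in {fired / trials:.4%} of adjustment windows")
# ~0.014% with these numbers, i.e. essentially never.
```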
Aug 16 2018
Non-Debian dependencies and the non-materialization of TUF for PyPI make it impossible to obtain this package securely.
Aug 9 2018
Aug 8 2018
Aug 7 2018
Jul 25 2018
This is sorted in a later version of systemd.
Jul 24 2018
There are up-to-date Whonix 14 testers versions available.
Jul 22 2018
@ng0 I wrote a proposal draft. Feel free to improve it before I post:
Jul 19 2018
Just this bit already:
Acknowledged and I'm lagging behind. You can expect an answer somewhere between mid August and beginning of October.
Jul 14 2018
We now have a DNS Certification Authority Authorization (CAA) policy.
Jul 9 2018
Same as T84#14765.
Jul 7 2018
Jun 29 2018
Check these alternatives out:
Jun 25 2018
Jun 24 2018
Jun 23 2018
Jun 20 2018
nftables transition info:
Jun 18 2018
May 18 2018
If you had read the chat content (which I assume you didn't), you would have seen some insight into the problems and the possible solutions.
May 16 2018
How are we doing on RAM use? Is it any more or less efficient than socat after you cut down the number of spawned instances?
Project looks dead; no recent releases.