- User Since
- Nov 21 2014, 10:16 PM (211 w, 2 d)
Wed, Dec 5
My advice is to use a private address range reserved for this purpose by IANA. These will never be publicly allocated. Since we use 10.x.x.x and moved away from 192.168.x.x, this leaves 172.16.x.x.
Mon, Dec 3
There's been research showing that trying to hide CPU information in a virtualizer is futile.
I think hiding the clock is a bad idea as a user may want to manually run sdwdate to adjust it if it's out of whack before initiating internet traffic. (This is on non-Qubes versions lacking auto time adjust)
Oct 28 2018
I disagree. Firetools makes administration easier and has a place on both VMs.
Oct 13 2018
We can now grab the browser tarball from the TPO onion instead which makes this ticket obsolete. Close if you concur.
Proposed implementations for multi-Tor suggested here:
The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only an adversary that runs guards, not a local adversary such as the host OS or the Whonix processes themselves.
In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number since we don't know the real one, but it serves to show the trends as we increase the number of guards per client and the number of clients per user. I do the kind of analysis we do in the Conflux paper, which is very relevant here (especially Table 3 and its discussion in Section 5.2). I update the numbers and extend that analysis for the scenarios you have described.
- 1 guard/client, 1 client/user. The adversary (i.e., the compromised guard) will have the ability to observe 10% of the clients and hence 10% of users. This is the situation today.
- 2 guards/client, 1 client/user. This is worse than scenario 1 above. There is now an 18% probability that exactly one of the guards is compromised per client and a 1% chance that both guards are compromised per client. The probability of at least one bad guard is hence 19%. There is not really a distinction between one or two bad guards from the user's perspective, since in both situations the client will go through a malicious guard within a short period of time, because the guard is picked uniformly at random from the guard set.
- 1 guard/client, 2 clients/user. The chance that at least one of the user's clients is observable again increases to 19% from the base 10% in scenario 1 above. This means that if the user splits her apps (or groups of apps) across the clients, there is a 19% chance that at least one of the app groups is compromised. However, for each client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.
- 2 guards/client, 2 clients/user. The fraction of observable clients increases to 54%, i.e. there is a 54% chance that at least one bad guard is present. This is worse than all the other scenarios above. However, if we fix apps (or groups of apps) to particular clients then we can compare to scenario 2, where the app group/client pairing is analogous and the same analysis holds: for each client there is again a 19% chance that a malicious guard is present. If we compare to scenario 3 above, we can see that if we only use 1 guard/client then we can drop the exposure back down to 10% for that client and hence that app group.
Taking the above into account, we can get good results by keeping the guard set size at 1 and having users spin up one client for each app. Then at most 10% of apps are compromised at *any given time*, but not all of them simultaneously. We can call this scenario (which is an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.
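For concreteness, here is a small Python sketch of the arithmetic behind the scenarios above. It assumes every guard is drawn independently and that the adversary controls a fraction f of guard selection probability; f = 0.10 is the same arbitrary placeholder used above, and the function name is just for illustration.

```python
# Sketch of the guard-compromise arithmetic, assuming each guard is sampled
# independently with probability f of being adversarial (f = 0.10 above).

def p_at_least_one_bad(f, guards_per_client, clients_per_user=1):
    """Probability that at least one of a user's guards is adversarial."""
    total_guards = guards_per_client * clients_per_user
    return 1 - (1 - f) ** total_guards

f = 0.10
print(p_at_least_one_bad(f, 1))        # scenario 1: 0.10
print(p_at_least_one_bad(f, 2))        # scenario 2: 1 - 0.9^2 = 0.19
print(p_at_least_one_bad(f, 1, 2))     # scenario 3, across both clients: 0.19
# Per client in scenario 3 the exposure stays at p_at_least_one_bad(f, 1) = 0.10,
# which is the property the 1 guard/app (1G/A) arrangement builds on.
```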
If we want to consider 1G/A, then the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A), or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to lean towards option 2, but then they have not considered the option of multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike the Whonix situation. I believe that option 2 is flawed because you never know whether you are in fact currently compromised or not. It might be better to assume that you are compromised and limit that compromise to some portion of your network activity rather than all or nothing, which is what option 1 provides.
I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.
Best regards, Tariq Elahi
Appendix: We can do better if we allow a user's clients to look at each other's lists to exclude guards that are already picked. The benefit would be that once the bad bandwidth has been assigned it can no longer affect subsequent guard selections. However, clients looking at each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might work to avoid this problem, and indeed the adversarial response to it would be weak, since the best they can do is spread their bad bandwidth over many relays and at best return to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.
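As a rough illustration of the appendix idea, here is a minimal sketch of sampling guards without replacement across a user's clients, so that a guard assigned to one client is excluded from the others. Plain random.sample stands in for Tor's bandwidth-weighted selection, and the comparison is done in the clear here; the zero-knowledge/oblivious variant mentioned above would hide each client's list from the others.

```python
import random

def pick_guards(all_guards, clients, guards_per_client):
    """Assign guards to each client, excluding guards already taken."""
    chosen = []
    available = list(all_guards)
    for _ in range(clients):
        picks = random.sample(available, guards_per_client)
        chosen.append(picks)
        for g in picks:
            available.remove(g)  # assigned bandwidth can no longer be re-picked
    return chosen
```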
Sorry, not reproducible on my end. May be related to the fact that you are running a non-standard setup with custom-compiled binaries. By running packages from your distro there is a higher chance that bugs are visible to more people and more likely to be fixed.
Oct 12 2018
Closing. Duplicate of:
There is nothing dead about it. I just explained this on the forum. It is perfectly workable and OpenPrivacy is working on creating a P2P asynchronous chat solution over its protocol.
It's on the roadmap but a little far off until ParrotOS changes can be combined with the upstream package. That will make maintenance and turning it on by default much easier.
It could be that the VM is confused because apparently there are two types of mice attached. I assumed that by adding virtio-mouse it would override and replace the emulated one. Turns out it's not this way, and I went ahead and reverted this config, which should be effective in the next release.
Oct 4 2018
@TNTBOMBOM I added a few more sites in a second paragraph of the first ticket. Please create the accounts when you have time.
Sep 17 2018
Sep 14 2018
Test out the (LUKS wrapper) Tomb implementation in KDE Vault. Should be around by Buster.
Sep 11 2018
Aug 17 2018
Template created: https://www.whonix.org/wiki/Template:Systemd-socket-proxyd
Off-topic: there is a PR from Algernon for initramfs packages; what's their status?
Aug 16 2018
Non-Debian dependencies and the non-materialization of TUF for PyPI make it impossible to obtain this package securely.
Aug 15 2018
Old pull request:
Aug 12 2018
Done. Connects successfully even when Transparent TCP/DNS is disabled on the gateway. So it uses stream isolation out of the box and is ready for prime time.
Aug 10 2018
So what task remains for this DNS/TransPort leak testing?
He was busy these past few months and thought there was no interest. @Patrick Expect a new release this coming week.
Aug 9 2018
Aug 8 2018
Why not ping him first? It's a waste of good work otherwise.
Aug 7 2018
In theory, we could make sdwdate provide a local (default, or optional opt-in server) NTP-compatible time provider ("sdwdate-server"). Could be useful anyhow. No idea how hard that would be.
And then configure NTP to connect only to that local NTP server.
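As a rough illustration of the sdwdate-server idea, here is a minimal, hypothetical sketch of an SNTP-style responder that simply hands out the local (sdwdate-adjusted) system clock on localhost. This is not existing sdwdate code, and field choices such as the stratum value are arbitrary.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def to_ntp(t):
    """Convert a Unix timestamp to a 64-bit NTP timestamp (32.32 fixed point)."""
    return struct.pack("!II", int(t) + NTP_EPOCH_OFFSET, int((t % 1) * 2**32))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 123))  # port 123 needs root or CAP_NET_BIND_SERVICE

while True:
    data, addr = sock.recvfrom(48)
    if len(data) < 48:
        continue
    now = time.time()
    reply = struct.pack("!BBBb", 0x24, 2, 0, -20)  # LI=0, VN=4, Mode=4 (server); stratum 2
    reply += struct.pack("!II", 0, 0)              # root delay, root dispersion
    reply += b"LOCL"                               # reference ID
    reply += to_ntp(now)                           # reference timestamp
    reply += data[40:48]                           # originate = client's transmit timestamp
    reply += to_ntp(now)                           # receive timestamp
    reply += to_ntp(time.time())                   # transmit timestamp
    sock.sendto(reply, addr)
```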
Aug 6 2018
The easy way: calculate the offset between local time and the onion average in timesync, then use ntpdate's slew option if the offset is less than 0.5s. Otherwise tell it to step the time immediately, so that you accurately mimic the default behavior. However, you can force slewing all the time with -B. This way you won't need to touch kernel syscalls, as ntpdate should be able to do the operation for you.
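A rough sketch of that decision logic, assuming the offset has already been computed by timesync and that ntpdate is pointed at some reachable NTP source (127.0.0.1 here is just a placeholder):

```python
import subprocess

def adjust_clock(offset_seconds, server="127.0.0.1"):
    """Slew for small offsets, step for large ones (the 0.5 s threshold above)."""
    if abs(offset_seconds) < 0.5:
        # -B forces ntpdate to slew gradually via adjtime(2)
        subprocess.run(["ntpdate", "-B", server], check=True)
    else:
        # -b forces an immediate step via settimeofday(2)
        subprocess.run(["ntpdate", "-b", server], check=True)
```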
From what I understand, this code path is only relevant when timesyncd is talking directly with NTP servers and reacting to replies about deltas between local and remote times. There is no way you can call that function from the command line when using timedatectl standalone AFAICT.
Aug 3 2018
Playing devil's advocate here: Ted Ts'o expresses strong skepticism about the efficacy of RNGs that rely on CPU jitter. Summary: CPU jitter may not be as random as thought to someone who designed the CPU cache and knows how its internals "tick". So while these RNGs may not do harm, another solution for RNG-less platforms may be a good idea.
An interesting implementation to work around early-boot entropy scarcity with haveged is to include it in the initrd. It may be hackish, but could be easier for Marmarek than writing something at the EFI level.
Done. Asked about Xen too but they may not be familiar with its innards. You may want to contact the Xen devs directly using my message as a template.
Aug 2 2018
I think it's worth asking the hypervisor devs whether this applies to the platforms we care about.
Jul 31 2018
jitterentropy-rng should solve this and is a mainline Linux solution that works the same way haveged does. Please see: https://phabricator.whonix.org/T817
Jul 27 2018
Since we are interested in ntpd's default behavior (for blending-in purposes), it turns out that it performs instant clock jumps once the delta is excessively large; otherwise its slewing algorithm would take forever to adjust the time.
It doesn't seem that timedatectl supports gradual time adjustment. Our next best option is ntpd, which can do so but cannot coexist with systemd-timesyncd: we can only run one or the other. According to popcon, ntpd is the most widely used time daemon, so it's the natural choice.
Jul 25 2018
The time could be set with timedatectl by feeding it the time with this command:
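Presumably something along these lines; the timestamp below is just an example value, and timedatectl refuses set-time while automatic NTP synchronization is enabled, so that may need to be switched off first:

```python
import subprocess

new_time = "2018-07-25 12:00:00"  # placeholder value, format "YYYY-MM-DD HH:MM:SS"
subprocess.run(["timedatectl", "set-ntp", "false"], check=True)
subprocess.run(["timedatectl", "set-time", new_time], check=True)
```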
Stretch+ uses systemd-timesyncd by default, therefore it's the most popular.
Jul 22 2018
@ng0 I wrote a proposal draft. Feel free to improve it before I post:
Jun 30 2018
Jun 29 2018
Check these alternatives out:
OK did so there
Jun 26 2018
Jun 25 2018
Jun 22 2018
Jun 20 2018
nftables transition info:
May 31 2018
Did a diff between the versions of the file before/after the change and here is the output:
May 30 2018
Perhaps the Qubes guys can have the entropybroker package communicate over the qrexec protocol to seed entropy from a reliable source like Dom0 to the other domains.
May 18 2018
You can probably use virtio-rng, since Qubes now runs in HVM mode and uses QEMU.
If you had read the chat content (which I assume you didn't), you would see some insight into the problems and what possible solutions there are.
May 16 2018
Project looks dead; no recent releases.
All socat mentions here with 7 results, less if we want the relevant pages only: https://www.whonix.org/w/index.php?title=Special%3ASearch&profile=default&fulltext=Search&search=socat
@Patrick seems self-explanatory. How are we doing on RAM use? Is it any more or less efficient than socat after you cut down the number of spawned instances?
I went ahead and reverted clflush restrictions to open the way for I2P by default without extra fiddling needed.
May 12 2018
Interesting backstory about this anti-feature in Debian. Nonetheless I've found a solution.
May 8 2018
Apr 29 2018
The public Tahoe-LAFS introducer is dormant:
Mar 22 2018
A switch to /opt/ seems like a great compromise that can shut up the FHS zealots. Is this something the Guix guys can do or is it for the adopting distro to handle?
Mar 19 2018
Awesome progress, thanks for the updates :) Right, the advantage of Guix is precisely that it doesn't follow the FHS. It's worth trying to ask for an exception from upstream no matter how slim the chances; at least there's more of a chance than never asking at all.
Mar 2 2018
Mar 1 2018
NB for the record: with qemu-ga a guest can still shut itself off via crafted input to the agent. So besides removing timer access from the guest, there was no other advantage to removing ACPI.
Actually we don't have to suspend the guest. Execution of any command on the host after resume is enough to create a unique event in qemu-ga's log file.
The proper and direct way to use virsh to communicate with the guest agent:
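For illustration, a hedged example of such an invocation (the domain name Whonix-Gateway is a placeholder); virsh qemu-agent-command passes a QMP-style JSON command straight to the agent:

```python
import subprocess

domain = "Whonix-Gateway"  # placeholder libvirt domain name
# guest-ping just checks that the agent answers; guest-info lists supported commands
for cmd in ('{"execute": "guest-ping"}', '{"execute": "guest-info"}'):
    subprocess.run(["virsh", "qemu-agent-command", domain, cmd], check=True)
```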
The YAJL parser used in libvirt is tiny, modern (written in 2007) and has no CVEs. It is a SAX-type, event-driven parser, unlike the vulnerable top-down recursive-descent type that was used in QEMU.
Feb 28 2018
It turns out the QEMU guest agent warning is not relevant to those who use libvirt; with libvirt a safe parser is used. Breakouts can only happen if a process on the host is designed to parse guest input, because there is no way to control that; otherwise it should be safe for our uses. This potentially simplifies the design in many respects, but a host package will still be needed. I will update the task list.
[libvirt-users] QEMU guest-agent safety in hostile VM?
* Most recent info on the test grid can be found on their freenode IRC channel.
Feb 27 2018
Asked the devs some questions about integration:
Whonix project metadata could be distributed using Tahoe-LAFS, a redundant, encrypted storage array accessible over Tor. Instructions to users about alternative download mechanisms for the project's code and documentation can be passed through this channel.