The issue was discussed by libvirt devs on Red Hat Bugzilla:
I even linked to a secure clipboard proposal that would have provided secure clipboard functionality by copying Qubes-style interaction. It went nowhere and was closed as WONTFIX.
Thu, Apr 18
I also added the cli version to the non-qubes-vm-enhancements-cli section
Wed, Apr 17
zulucrypt works in Buster. Tomb does not.
Sun, Apr 14
Then I am wondering if we ought to install any of the following recommended packages too?
Fri, Apr 5
@Patrick What is the status of integration? Since we have kloak, this is also a great defense to have. There is a script there for packaging it as a deb:
Fri, Mar 29
Likely part of 5.2. We won't see it until the version after Buster unless we use backports.
Mar 26 2019
Can you think of any other app besides a browser that parses JS/Remote code that can manipulate it into requesting those particular addresses?
Mar 25 2019
On second thought, I wonder if this is still a Whonix-specific fingerprinting vector. Any DNS request for 172.24.0.0 would resolve to bshc44ac76q3kskw.onion. Not something a remote website could exploit?
@Patrick Now we have to figure out how, or if, we can use the version in sid on Buster, since it is no longer available in stable-next after the freeze. Let me know what you think and I will open a ticket if it is doable.
Mar 22 2019
Test the tomb LUKS container script as an alternative.
Feb 21 2019
What distro are you using?
Feb 18 2019
Other improvements in this thread, such as functioning SMTP gateways, are also part of this ticket:
Good for the time being. If anyone wants to add more, there is an outline of procedures to build on.
Feb 2 2019
I created a user documentation page explaining this feature and when to use it.
Moved to Xfce, so the past comment is irrelevant. Will test Zulu after moving to Buster and add it if it works.
@Patrick Was this only relevant for Retroshare?
The concept was documented for operational use. Automatic guard de-duplication was considered too complex to deploy, and manual checking is enough.
Mixmaster is not present in Buster BTW
Looks like someone beat us to it:
Ready to close if happy.
A middle-of-the-road solution. How does this sound? Confirmed it falls within the private address CIDR:
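To make the idea concrete, the mapping could be expressed as a one-line torrc entry. This is only a sketch using the onion address mentioned earlier in this thread, not necessarily how Whonix actually wires this up:

```
# torrc sketch (assumed approach, not Whonix's shipped config)
MapAddress 172.24.0.0 bshc44ac76q3kskw.onion
```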
Jan 31 2019
Jan 21 2019
Building initiates. I had these deps installed anyhow. Unpinning the CPU resolved some early build error, but now it craps out at RAW image creation. Not really related to your inquiry.
Jan 13 2019
Seems so. This one is a context-menu option or command-line tool, but it supports a lot more than the original, and it pulls in other specialized tools to do the work.
Jan 11 2019
Onionshare is in Buster.
Jan 4 2019
Done. You can close this ticket once you agree with edits.
Dec 28 2018
From this size comparison on Debian wiki, I think the best and most secure option is the smallest and most minimal one: micro-httpd
Dec 5 2018
My advice is to use a private address range reserved for this purpose by IANA; these will never be assigned to anyone in the future. Since we use 10.x.x.x and moved away from 192.168.x.x, this leaves 172.16.0.0/12.
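For the record, RFC 1918 reserves three private ranges, and membership can be checked with Python's standard `ipaddress` module; a quick check of the candidate address from this thread:

```python
import ipaddress

# The three RFC 1918 private ranges
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

addr = ipaddress.ip_address("172.24.0.0")  # candidate from the discussion
print([str(net) for net in rfc1918 if addr in net])  # -> ['172.16.0.0/12']
```

Note that the 172 range is 172.16.0.0/12 (172.16.x.x through 172.31.x.x), not all of 172.x.x.x.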
Dec 3 2018
There's been research showing that trying to hide CPU information in a virtualizer is futile.
I think hiding the clock is a bad idea as a user may want to manually run sdwdate to adjust it if it's out of whack before initiating internet traffic. (This is on non-Qubes versions lacking auto time adjust)
Oct 28 2018
I disagree. Firetools makes administration easier and has a place on both VMs.
Oct 13 2018
We can now grab the browser tarball from the TPO onion instead which makes this ticket obsolete. Close if you concur.
Proposed implementations for multi-Tor suggested here:
The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only the adversary that runs guards and not the local adversary like the host OS or the Whonix processes themselves.
In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number since we don't know the real number, but it serves to show the trends as we increase the guards per client and number of clients per user. I do the kind of analysis we do in the Conflux paper which is very relevant here, especially Table 3 and its discussion in section 5.2. I update the numbers and extend that analysis for the scenarios you have described.
- 1 guard/client, 1 client/user. The adversary (i.e., the compromised guard) will be able to observe 10% of the clients and hence 10% of users. This is the situation today.
- 2 guards/client, 1 client/user. This is worse than scenario 1 above. There is now an 18% probability that exactly one of the guards is compromised per client and a 1% chance that both guards are compromised per client. The probability of at least one bad guard is hence 19%. There is no real distinction between one or two bad guards from the user's perspective, since in both situations the client will go through a malicious guard within a short period of time, as the guard is picked uniformly at random from the guard set.
- 1 guard/client, 2 clients/user. The observable clients again increase to 19% from the base 10% in scenario 1 above. This means that if the user splits her apps (or groups of apps) across the clients, there is a 19% chance that at least one of the app groups is compromised. However, for each client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.
- 2 guards/client, 2 clients/user. The observable clients increase to 34% (1 - 0.9^4 ≈ 0.344). This means that there is a 34% chance that at least one bad guard is present. This is worse than all the other scenarios above. However, if we fix apps (or groups of apps) to particular clients, then we can compare to scenario 2, where the app group/client is analogous and the same analysis holds. Then, for each client there is again a 19% chance that a malicious guard is present. If we compare to scenario 3 above, we can see that if we only use 1 guard/client, then we can drop the exposure back down to 10% for that client, and hence for that app group.
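The per-client numbers in the scenarios above follow from independent guard picks. Assuming, as in the analysis, a 10% adversarial fraction of guard bandwidth, the two-guard case works out as:

```python
# Probability model: each guard is bad independently with probability p.
p = 0.10  # assumed adversarial fraction of guard bandwidth

# 2 guards/client:
exactly_one = 2 * p * (1 - p)      # one bad guard, one good (either order)
both = p * p                       # both guards bad
at_least_one = 1 - (1 - p) ** 2    # complement of "both guards good"

print(f"{exactly_one:.2f} {both:.2f} {at_least_one:.2f}")  # -> 0.18 0.01 0.19
```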
Taking the above into account, we can get good results by keeping the guard set size at 1 and having users spin up one client per app. Then at most 10% of apps are compromised at *any given time*, but not all simultaneously. We can call this scenario (an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.
If we want to consider 1G/A, the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A) or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to lean towards option 2, but they have not considered multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike the Whonix situation. I believe that option 2 is flawed because you never know whether you are currently compromised or not. It might be better to assume that you are compromised and mitigate that compromise to some portion of your network activity, rather than all or nothing, which is what option 1 provides.
I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.
Best regards, Tariq Elahi
Appendix: We can do better if we allow a user's clients to look at each other's guard lists to exclude guards that are already picked. The benefit would be that once the bad bandwidth has been assigned, it can no longer affect subsequent guard selections. However, clients looking at each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might work around this problem, and indeed the adversarial response would be weak: the best they can do is spread their bad bandwidth over many relays, at best returning to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.
Sorry, not reproducible on my end. It may be related to the fact that you are running a non-standard setup with custom-compiled binaries. If you run packages from your distro, bugs are visible to more people and more likely to be fixed.
Oct 12 2018
Closing. duplicate of:
There is nothing dead about it. I just explained this on the forum. It is perfectly workable, and OpenPrivacy is working on creating a P2P asynchronous chat solution over its protocol.
It's on the roadmap, but a little far off until the ParrotOS changes can be combined with the upstream package. That will make maintenance, and turning it on by default, much easier.
It could be that the VM is confused because apparently there are two types of mice attached. I assumed that adding virtio-mouse would override and replace the emulated one. It turns out it doesn't work that way, so I went ahead and reverted this config, which should be effective in the next release.
Oct 4 2018
@TNTBOMBOM I added a few more sites in a second paragraph of the first ticket. Please create the accounts when you have time.
Sep 17 2018
Sep 14 2018
Test out the (LUKS wrapper) Tomb implementation in KDE Vault. Should be around by Buster.
Sep 11 2018
Aug 17 2018
Template created: https://www.whonix.org/wiki/Template:Systemd-socket-proxyd
Off-topic: there is a PR from Algernon for initramfs packages; what's their status?
Aug 16 2018
Non-Debian dependencies, and TUF never materializing on PyPI, make it impossible to obtain this package securely.
Aug 15 2018
Old pull request:
Aug 12 2018
Done. It connects successfully even with Transparent TCP/DNS disabled on the gateway. So it uses stream isolation out of the box and is ready for prime time.
Aug 10 2018
So what task remains for this DNS/TransPort leak testing?
He was busy these past few months and thought there was no interest. @Patrick Expect a new release this coming week.
Aug 9 2018
Aug 8 2018
Why not ping him first? It's a waste of good work otherwise.
Aug 7 2018
In theory, we could make sdwdate provide a local (by default, or optionally an opt-in server) NTP-compatible time provider. Could be useful anyhow. -> sdwdate-server. No idea how hard that would be.
And then configure NTP to connect only to that local NTP server.
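A minimal sketch of that client-side configuration, assuming a hypothetical sdwdate-backed NTP server listening on localhost (the server itself does not exist yet):

```
# /etc/ntp.conf sketch; 127.0.0.1 is the hypothetical local sdwdate-server
server 127.0.0.1 iburst
# refuse everything else so no external NTP traffic is generated
restrict default ignore
restrict 127.0.0.1
```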
Aug 6 2018
The easy way: calculate the offset between local time and the onion average in timesync, then use ntpdate's slew option if the offset is less than 0.5 s. Otherwise, tell it to step the time immediately, so that you accurately mimic the default behavior. However, you can force slewing all the time with -B. This way you won't need to touch kernel syscalls, as ntpdate should be able to do the operation for you.
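The decision logic described above in a few lines; the 0.5 s threshold is from the comment, and the flags are ntpdate's real ones (-B forces slewing, -b steps the clock):

```python
def ntpdate_mode(offset_seconds: float, force_slew: bool = False) -> str:
    """Pick ntpdate's behaviour for a given offset from the onion average."""
    if force_slew or abs(offset_seconds) < 0.5:
        return "slew (-B)"   # adjust gradually, no visible clock jump
    return "step (-b)"       # jump the clock immediately

print(ntpdate_mode(0.3))   # -> slew (-B)
print(ntpdate_mode(2.0))   # -> step (-b)
```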
From what I understand, this code path is only relevant when timesyncd is talking directly to NTP servers and reacting to replies about deltas between local and remote time. There is no way to call that function from the command line when using timedatectl standalone, AFAICT.
Aug 3 2018
Playing devil's advocate here: Ted Ts'o expresses strong skepticism about the efficacy of RNGs that rely on CPU jitter. Summary: CPU jitter may not be as random as thought to someone who designed the CPU cache and knows how its internals "tick". So while these RNGs may do no harm, another solution for RNG-less platforms may be a good idea.
An interesting implementation to work around early-boot entropy scarcity with haveged is to include it in the initrd. It may be hackish, but could be easier for Marmarek than writing something at the EFI level.
Done. Asked about Xen too but they may not be familiar with its innards. You may want to contact the Xen devs directly using my message as a template.
Aug 2 2018
I think it's worth asking the hypervisor devs whether this applies to the platforms we care about.
Jul 31 2018
jitterentropy-rng should solve this and is a mainline Linux solution that works the same way haveged does. Please see: https://phabricator.whonix.org/T817
Jul 27 2018
Since we are interested in ntpd's default behavior (for blending-in purposes): it turns out that it performs an instant clock jump once the delta is excessively large; otherwise its slewing algorithm would take forever to adjust the time.
It doesn't seem that systemd-timesyncd supports gradual time adjustment. Our next best option is ntpd, which can do so but cannot coexist with systemd-timesyncd; we can only run one or the other. According to popcon, ntpd is the most widely used time daemon, so it's the natural choice.
Jul 25 2018
The time could be set with timedatectl by feeding it the time with this command:
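Presumably something along these lines; the timestamp is a placeholder, and `set-time` only works while NTP synchronization is disabled:

```
timedatectl set-ntp false
timedatectl set-time "2018-07-25 12:00:00"
```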
Stretch and later use systemd-timesyncd by default; therefore it's the most popular.
Jul 22 2018
@ng0 I wrote a proposal draft. Feel free to improve it before I post:
Jun 30 2018
Jun 29 2018
Check these alternatives out:
OK, did so there.
Jun 26 2018
Jun 25 2018
Jun 22 2018
Jun 20 2018
nftables transition info:
May 31 2018
Did a diff between the versions of the file before/after the change and here is the output:
May 30 2018
Perhaps the Qubes guys can have the entropybroker package communicate over the qrexec protocol to seed entropy from a reliable source like dom0 to the other domains.