- User Since
- Nov 21 2014, 10:16 PM (269 w, 3 d)
Tue, Jan 7
An interesting product that triggers a system wipe if the cable is pulled:
Dec 8 2019
Oct 17 2019
Starting with Bullseye nftables will be the default:
Oct 15 2019
Oct 13 2019
Analysis by Cyrus cited here for completion:
Oct 10 2019
Already packaged in Debian but currently orphaned and in need of a maintainer, according to its ex-maintainer:
Oct 7 2019
An alternative proposal for editing ISNs without involving the kernel:
Oct 6 2019
When an implementation is decided, let's determine whether we can include this in security-misc for use on Linux hosts and Kicksecure. We would need some way of detecting the active NIC, since on wireless systems the interface in use is wlan0 rather than eth0.
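One possible way to detect the active NIC, as a minimal sketch: parse /proc/net/route and return whichever interface carries the default route. The function name and the sample usage are illustrative, not part of security-misc:

```python
def default_interface(route_table):
    """Return the interface carrying the default route (destination 0.0.0.0),
    given the text of /proc/net/route. Returns None if no default route exists.
    """
    for line in route_table.splitlines()[1:]:  # first line is the header
        fields = line.split()
        # Column 2 of /proc/net/route is the hex-encoded destination;
        # the default route is all zeroes.
        if len(fields) >= 2 and fields[1] == "00000000":
            return fields[0]
    return None

# On a live system:
#   with open("/proc/net/route") as f:
#       print(default_interface(f.read()))
```

On a wireless host this would return wlan0 instead of eth0, which is exactly the distinction we need.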
tc-netem is a utility that is part of the iproute2 package in Debian. It leverages functionality already built into Linux and userspace utilities to simulate networks including packet delays and loss.
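As a rough illustration of the kind of invocation tc-netem expects, here is a hypothetical helper that builds the command line; the interface name and the 100ms/1% figures are arbitrary examples, and actually applying the qdisc requires root:

```python
import shlex

def netem_cmd(iface, delay="100ms", loss="1%"):
    """Build a tc-netem command that adds artificial delay and packet loss
    on the given interface (run via subprocess as root to take effect)."""
    return ["tc", "qdisc", "add", "dev", iface, "root", "netem",
            "delay", delay, "loss", loss]

print(shlex.join(netem_cmd("eth0")))
# tc qdisc add dev eth0 root netem delay 100ms loss 1%
```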
Oct 5 2019
TPM hw not working. Troubleshooting thread:
Oct 4 2019
Jul 22 2019
Yes, Zulucrypt is included and functional on KVM 15. However, fixes for both Zulucrypt and Tomb haven't made it into Buster from what I've tested. Zulucrypt also has a tomb plugin to open Tomb files.
Whonix 15 has since come out. Has this been resolved? Not reproducible on Debian Buster either.
Problem has since been reported and fixed upstream. Let's look into re-including by Bullseye.
This bug does not exist on Debian stable after I upgraded. I have it documented for Arch along with a workaround. Nothing more to be done on my end.
May 22 2019
@Patrick were you able to reproduce this? I wasn't.
He was a major dev/creator of CoyIM but not the only one.
His detailed reply:
Accepted as optional feature/usecase. Moved implementation design from protocol level to spice-gtk.
May 3 2019
Related thread on general kernel hardening:
May 1 2019
Pass 10000 - Fail 0 - Rounds 10000
Apr 30 2019
Apr 25 2019
Issue was discussed by Libvirt devs on RedHat bugzilla:
I even linked to a secure clipboard proposal that would have provided secure clipboard functionality by copying Qubes-style interaction. It went nowhere and was closed as WONTFIX.
Apr 18 2019
I also added the CLI version to the non-qubes-vm-enhancements-cli section. It is a dependency of the GUI install but not vice versa. The Zulucrypt plugin package was added there too, since enhancements-cli is a subset of enhancements-gui.
Apr 17 2019
zulucrypt works in Buster. Tomb does not.
Apr 14 2019
Then I am wondering if we ought to install any of the following recommended packages too?
Apr 5 2019
@Patrick What is the status of integration? Since we have kloak, this is also a great defense to have. There is a script there for packaging it as a deb:
Mar 29 2019
Likely part of 5.2. We won't see it until the version after Buster unless we use backports.
Mar 26 2019
Can you think of any other app besides a browser that parses JS/remote code and could be manipulated into requesting those particular addresses?
Mar 25 2019
On second thought, I wonder if this is still a Whonix-specific fingerprinting vector. Any DNS request for 172.24.0.0 would resolve to bshc44ac76q3kskw.onion. Not something a remote website could exploit?
@Patrick Now we have to figure out how, or if, we can use the version in sid on Buster, since it is no longer available in stable-next after the freeze. Let me know what you think and I will open a ticket if it is doable.
Mar 22 2019
Test the tomb LUKS container script as an alternative.
Feb 21 2019
What distro are you using?
Feb 18 2019
Other improvements in this thread, such as functioning SMTP gateways, are also part of this ticket:
Good for the time being. If anyone wants to add more, there is an outline of procedures to build on.
Feb 2 2019
I created a user documentation page explaining this feature and when to use it.
Moved to xfce so past comment is irrelevant. Will test Zulu after moving to Buster and add if it works.
@Patrick Was this only relevant for Retroshare?
The concept was documented for operational use. Auto Guard de-duplication considered too complex to deploy and manual checking is enough.
Mixmaster is not present in Buster, BTW.
Looks like someone beat us to it:
Ready to close if happy.
A middle-of-the-range solution. How does this sound? Confirmed it falls within the private address CIDR:
Jan 31 2019
Jan 21 2019
The build starts; I had these deps installed anyhow. Unpinning the CPU resolved an early build error, but now it fails at RAW image creation. Not really related to your inquiry.
Jan 13 2019
Seems so. This one is a context menu option or commandline but it supports a lot more stuff than the original and it pulls in other specialized tools to do the work.
Jan 11 2019
Onionshare is in Buster.
Jan 4 2019
Done. You can close this ticket once you agree with edits.
Dec 28 2018
From this size comparison on Debian wiki, I think the best and most secure option is the smallest and most minimal one: micro-httpd
Dec 5 2018
My advice is to use a private address range reserved for this purpose by IANA; these will never be publicly assigned to anyone. Since we use 10.x.x.x and moved away from 192.168.x.x, this leaves the 172.16.0.0/12 range.
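To sanity-check a candidate address against the IANA/RFC 1918 private ranges, here is a quick sketch using Python's ipaddress module; the tested addresses are just examples:

```python
import ipaddress

# RFC 1918 private ranges reserved by IANA; addresses here are never
# assigned on the public internet.
PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def in_private_range(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(in_private_range("172.24.0.0"))  # True: inside 172.16.0.0/12
print(in_private_range("172.32.0.1"))  # False: just past the /12 boundary
```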
Dec 3 2018
There's been research showing that trying to hide CPU information in a virtualizer is futile.
I think hiding the clock is a bad idea as a user may want to manually run sdwdate to adjust it if it's out of whack before initiating internet traffic. (This is on non-Qubes versions lacking auto time adjust)
Oct 28 2018
I disagree. Firetools makes administration easier and has a place on both VMs.
Oct 13 2018
We can now grab the browser tarball from the TPO onion instead which makes this ticket obsolete. Close if you concur.
Proposed implementations for multi-Tor suggested here:
The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only an adversary that runs guards, not a local adversary such as the host OS or the Whonix processes themselves.
In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number since we don't know the real number, but it serves to show the trends as we increase the guards per client and number of clients per user. I do the kind of analysis we do in the Conflux paper which is very relevant here, especially Table 3 and its discussion in section 5.2. I update the numbers and extend that analysis for the scenarios you have described.
- 1 guard/client, 1 client/user.
The adversary (i.e., the compromised guard) will be able to observe 10% of clients and hence 10% of users. This is the situation today.
- 2 guards/client, 1 client/user.
This is worse than scenario 1 above. There is now an 18% probability that exactly one of the guards is compromised per client and a 1% chance that both guards are compromised per client. The probability of at least one bad guard is hence 19%. There is no real distinction between one or two bad guards from the user's perspective, since in both situations the client will go through a malicious guard within a short period of time, because the guard is picked uniformly at random from the guard set.
- 1 guard/client, 2 clients/user.
The observable clients again increase to 19% from the base 10% in scenario 1 above. This means that if the user splits her apps (or groups of apps) across the clients, there is a 19% chance that at least one app group is compromised. However, for each client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.
- 2 guards/client, 2 clients/user.
The observable clients increase to 54%. This means there is a 54% chance that at least one bad guard is present. This is worse than all the other scenarios above. However, if we fix apps (or groups of apps) to particular clients, then we can compare to scenario 2, where the app group/client is analogous and the same analysis holds. Then, for each client there is again a 19% chance that a malicious guard is present. If we compare to scenario 3 above, we can see that if we use only 1 guard/client, we can drop the exposure back down to 10% for that client and hence for that app group.
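For reference, the 10% and 19% per-client figures above follow from treating each guard draw as an independent pick with the hypothetical 10% adversarial guard bandwidth; a minimal sketch of that calculation:

```python
def p_compromised(frac_bad, n_guards):
    """Probability that at least one of n independently chosen guards is
    malicious, given the adversary controls frac_bad of guard bandwidth."""
    return 1 - (1 - frac_bad) ** n_guards

# Hypothetical 10% adversarial guard bandwidth, as assumed in the analysis.
print(round(p_compromised(0.10, 1), 2))  # 0.1  -> 10% per client, 1 guard
print(round(p_compromised(0.10, 2), 2))  # 0.19 -> 19% per client, 2 guards
```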
Taking the above into account we can get good results by keeping the guard set size to 1 and users spin up one client for each app. Then we can achieve at most 10% of apps compromised at *any given time* but not simultaneously. We can call this scenario (which is an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.
If we want to consider 1G/A, then the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A), or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to lean towards option 2, but they have not considered multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike the Whonix situation. I believe option 2 is flawed because you never know whether you are in fact currently compromised. It may be better to assume you are compromised and mitigate that compromise to some portion of your network activity, rather than all or nothing, which is what option 1 provides.
I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.
We can do better if we allow a user's clients to look at each other's lists to exclude guards that are already picked. The benefit is that once the bad bandwidth has been assigned, it can no longer affect subsequent guard selections. However, clients looking at each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might avoid this problem, and the adversarial response would be weak: the best they can do is spread their bad bandwidth over many relays and at best return to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.
Sorry, not reproducible on my end. It may be related to the fact that you are running a non-standard setup with custom-compiled binaries. If you run packages from your distro, bugs are visible to more people and more likely to be fixed.
Oct 12 2018
Closing. duplicate of:
There is nothing dead about it. I just explained this on the forum. It is perfectly workable, and OpenPrivacy is working on creating a P2P asynchronous chat solution over its protocol.
It's on the roadmap but a little far off, until the ParrotOS changes can be combined with the upstream package. That will make maintenance and enabling it by default much easier.
It could be the VM is confused because apparently there are two types of mice attached. I assumed that adding virtio-mouse would override and replace the emulated one. Turns out it's not this way, so I went ahead and reverted this config, which should be effective in the next release.
Oct 4 2018
@TNTBOMBOM I added a few more sites in a second paragraph of the first ticket. Please create the accounts when you have time.
Sep 17 2018
Sep 14 2018
Test out the (LUKS wrapper) Tomb implementation in KDE Vault. Should be around by Buster.
Sep 11 2018
Aug 17 2018
Template created: https://www.whonix.org/wiki/Template:Systemd-socket-proxyd
Offtopic: There is a PR from Algernon for initramfs packages; what is their status?
Aug 16 2018
Non-Debian dependencies and the non-materialization of TUF for PyPI make a secure way of obtaining this package impossible.
Aug 15 2018
Old pull request: