https://wiki.nftables.org/wiki-nftables/index.php/Atomic_rule_replacement
May 16 2023
May 15 2023
Some progress.
May 9 2023
In other words, iptables is already symlinked to iptables-nft anyhow. Therefore Whonix is already using iptables-nft.
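For anyone who wants to confirm this on their own system, a rough check, assuming a Debian-based install where the alternatives system manages iptables:

    # Show which backend the iptables command resolves to.
    update-alternatives --display iptables
    # iptables-nft prints "(nf_tables)" in its version string; iptables-legacy prints "(legacy)".
    iptables --version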
Feb 17 2023
Jan 19 2023
Sep 8 2021
Aug 9 2021
In T509#20232, @ak88 wrote: Any updates on this?
Any updates on this?
Aug 13 2020
Aug 12 2020
After running a bunch of TCP ping tests, the conclusion is that this attack is not nearly as effective against TCP as it is against ICMP. The latency is much lower for TCP pings, and although it decreases slightly under CPU stress, the effect is not consistent. Reloading pages in TBB with CPU stress on/off does not impact latency readings, while doing so with tc attached leaves massive latency footprints, implying it will ironically make such attacks much easier in addition to degrading performance.
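For reproducibility, a TCP ping in this context could look like the following; the target host and port are placeholders, and hping3 is just one of several tools that can do this:

    # Send 10 TCP SYN probes to port 443 and report round-trip times (requires root).
    sudo hping3 -S -p 443 -c 10 example.com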
Aug 7 2020
Cyrus recommends adding delays per packet to disrupt the inter-packet patterns that remain. The command can be fine-tuned as such:
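The original command is not reproduced in this excerpt; as an illustration only, a per-packet delay with jitter via tc-netem might look like this (interface name and timings are placeholders):

    # Add a 10ms delay with 5ms of random jitter to every outgoing packet on eth0.
    sudo tc qdisc add dev eth0 root netem delay 10ms 5ms
    # Remove it again when done.
    sudo tc qdisc del dev eth0 root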
Aug 1 2020
The good news is I think I've figured out the equivalent tc-netem command after looking at the slot parameter in the manual:
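The command itself does not appear in this excerpt; as a rough sketch based on the tc-netem(8) slot syntax, it could take a form like this (values are placeholders):

    # Deliver packets in bursts: accumulate for a random 800us-1ms slot,
    # then release at most 32 packets or 64kB per slot.
    sudo tc qdisc add dev eth0 root netem slot 800us 1ms packets 32 bytes 64k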
May 30 2020
Ticket above closed and convo moved to tails-dev.
Apr 23 2020
Feb 14 2020
Dec 23 2019
Dec 11 2019
It looks like bpfilter is in rather early stages, and it will be a few years until we see it in Debian.
Or skip nftables and use Berkeley Packet Filter (BPF)?
Nov 21 2019
Not a problem anymore.
Nov 6 2019
Oct 21 2019
NonaSuomy:
Added the requested nftables example from duclicsic in #netfilter on Freenode.
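The example itself is not quoted here; in the spirit of the atomic rule replacement page linked above, a minimal sketch would be to keep the whole ruleset in one file that starts with flush ruleset and load it in a single transaction (file path and rules are placeholders):

    # Write a complete ruleset to a file; "flush ruleset" at the top makes the reload atomic,
    # so either the whole new ruleset applies or the old one stays in place.
    cat > /tmp/ruleset.nft <<'EOF'
    flush ruleset
    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            iif "lo" accept
        }
    }
    EOF
    sudo nft -f /tmp/ruleset.nft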
Oct 17 2019
Starting with Bullseye nftables will be the default:
Oct 15 2019
Oct 13 2019
Analysis by Cyrus cited here for completeness:
Oct 6 2019
Reported build failures:
When an implementation is decided, let's decide if we can include this in security-misc for use on Linux hosts and Kicksecure. We would need some way of detecting the active NIC, since on wireless systems wlan0 is the interface of choice and not eth0.
tc-netem is a utility that is part of the iproute2 package in Debian. It leverages functionality already built into Linux and userspace utilities to simulate network conditions, including packet delays and loss.
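One possible way to detect the active interface, sketched here as an assumption rather than a settled implementation, is to take the device of the default route and attach netem to it:

    # Pick the interface carrying the default route (eth0, wlan0, ...).
    NIC="$(ip -o route show default | awk '{print $5}' | head -n 1)"
    # Attach a netem qdisc to that interface; delay values are placeholders.
    sudo tc qdisc add dev "$NIC" root netem delay 10ms 5ms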
Aug 11 2019
Aug 9 2019
install electrum appimage by default:
https://github.com/Whonix/anon-meta-packages/commit/71d40f5316ee7eb38eb04142d80d23c56a48407b
Jul 6 2019
Any update?
Jun 27 2019
Will keep watching what Tails is doing.
May 12 2019
Maybe there is no need. It's just that when Tails has a ticket, we should check it at Whonix too. Thank you for looking into this, too!
The way it is now looks fine. Why would it need to be changed?
madaidan (madaidan):
> https://lists.ubuntu.com/archives/apparmor/2016-February/009371.html says it is used for various things so it might break some things. Wouldn't using a fake machine-id e.g. a bunch of zeroes fix this?
May 11 2019
https://lists.ubuntu.com/archives/apparmor/2016-February/009371.html says it is used for various things so it might break some things.
In T582#18461, @madaidan wrote: Would it cause any issues if the machine-id was just deleted or replaced with a bunch of 0s?
May 10 2019
Would it cause any issues if the machine-id was just deleted or replaced with a bunch of 0s?
Apr 6 2019
Unfortunately, not possible.
Feb 2 2019
The concept was documented for operational use. Automatic guard de-duplication was considered too complex to deploy, and manual checking is enough.
Jan 16 2019
Jan 13 2019
Done
Jan 6 2019
Jan 4 2019
Done. You can close this ticket once you agree with edits.
Jan 2 2019
Sounds good!
Dec 28 2018
From this size comparison on the Debian wiki, I think the best and most secure option is the smallest and most minimal one: micro-httpd
Dec 22 2018
We still have the warning on https://www.whonix.org/wiki/Onion_Services.
Dec 9 2018
Dec 7 2018
Dec 3 2018
I think hiding the clock is a bad idea as a user may want to manually run sdwdate to adjust it if it's out of whack before initiating internet traffic. (This is on non-Qubes versions lacking auto time adjust)
Nov 20 2018
Oct 12 2018
Proposed implementations for multi-Tor are suggested here:
The short story is that things get worse very quickly, but there is hope.
The analysis below assumes only an adversary that runs guards, not a local adversary such as the host OS or the Whonix processes themselves.
In my analysis I assume a hypothetical adversarial guard bandwidth of 10% of the entire network. This is an arbitrary number since we don't know the real number, but it serves to show the trends as we increase the guards per client and number of clients per user. I do the kind of analysis we do in the Conflux[1] paper which is very relevant here, especially Table 3 and its discussion in section 5.2. I update the numbers and extend that analysis for the scenarios you have described.
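These per-client figures can be reproduced with the binomial complement, assuming each guard is picked independently and has a 10% chance of being adversarial (consistent with the 18% + 1% breakdown given for the two-guard case below):

    P(at least one compromised guard among n guards) = 1 - (1 - 0.10)^n
    n = 1:  1 - 0.90 = 10%
    n = 2:  1 - 0.81 = 19%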
- 1 guard/client, 1 client/user.
The adversary (i.e., the compromised guard) will have the ability to observe 10% of the clients and hence 10% of users. This is the situation today.
- 2 guards/client, 1 client/user.
This is worse than 1 above. There is now an 18% probability that only one of the guards is compromised per client and a 1% chance that both guards are compromised per client. The probability of at least one bad guard is hence 19%. There is not really a distinction between one or two bad guards from the user perspective, since in both situations the client will go through a malicious guard within a short period of time, because the guard is picked uniformly at random from the guard set.
- 1 guard/client, 2 clients/user.
The observable clients again increase to 19% from the base 10% in 1 above. This means that if the user splits her apps (or groups of apps) across the clients, then there is a 19% chance that at least one of the app groups is compromised. However, for each client there is still only a 10% chance that a malicious guard is present. Is this configuration better than scenario 2 above? Perhaps, but let's look at the following scenario first.
- 2 guards/client, 2 clients/user.
The observable clients increase to 54%. This means that there is a 54% chance that at least one bad guard is present. This is worse than all the other scenarios above. However, if we fix apps (or groups of apps) to particular clients, then we can compare to scenario 2, where the app group/client is analogous and the same analysis holds. Then, for each client there is again a 19% chance that a malicious guard is present. If we compare to 3 above, we can see that if we only use 1 guard/client then we can drop the exposure back down to 10% for that client and hence that app group.
Taking the above into account, we can get good results by keeping the guard set size at 1 and having users spin up one client for each app. Then we can achieve at most 10% of apps compromised at *any given time*, but not simultaneously. We can call this scenario (which is an extension of scenario 3) the 1 guard/app scenario (1G/A). See the appendix for more tweaks to decrease guard exposure.
If we want to consider 1G/A, then the next question for your user base is whether it is better to 1) have some portion of your apps compromised at *all* times (scenario 1G/A) or 2) have *all* your apps compromised some portion of the time (scenario 1). Tor tends to bend towards option 2, but then they have not considered the option of multi-client usage, since it doesn't improve the situation in a non-compartmentalized setting, unlike the Whonix situation. I believe that option 2 is flawed because you never know if you are in fact currently compromised or not. It might be better to go ahead assuming that you are compromised and mitigate that compromise to some portion of your network activity rather than all or nothing, which is what option 1 provides.
I hope that answers your questions. Please do not hesitate to get in touch again if you would like to discuss further. I think this is a very interesting problem area and would be happy to contribute to improving the situation.
Best regards,
Tariq Elahi
[1] http://cacr.uwaterloo.ca/techreports/2013/cacr2013-16.pdf
Appendix
We can do better if we allow a user's clients to look at each other's lists to exclude guards that are already picked. The benefit would be that once the bad bandwidth has been assigned, it can no longer affect subsequent guard selections. However, clients looking at each other's memory space would compromise your vision of process containment. A zero-knowledge/oblivious method for comparing guard lists might work to avoid this problem, and indeed the adversarial response would be weak, since the best they can do is spread their bad bandwidth over many relays and at best return to the original exposure rate (e.g. 10%), but now with the added cost of running many more relays.
Sep 20 2018
Sep 18 2018
Actually, the "apt-daily.timer: Adding 1h 17min 24.927437s random time" message have real impact, not only noise. Each time sdwdate change time, systemd adds a random delay to those timers. which means the timer will never expire (unless that random delay will happen to be very close to 0 - i.e. below the time until sdwdate change the time, which looks to be 1s).
Aug 16 2018
Non-Debian dependencies and the non-materialization of TUF for PyPI make it impossible to obtain this package in a secure way.
Aug 9 2018
Aug 8 2018
Aug 7 2018
Jul 25 2018
This is sorted in a later version of systemd.
Jul 24 2018
There are up-to-date Whonix 14 testers versions available.
Jul 22 2018
@ng0 I wrote a proposal draft. Feel free to improve it before I post:
Jul 19 2018
Jul 14 2018
We now have a DNS Certification Authority Authorization (CAA) policy.
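For anyone who wants to verify such a policy from the outside, one way is to query the CAA record directly (the domain is just an example):

    # List the certificate authorities allowed to issue for the domain.
    dig +short whonix.org CAA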
Jul 9 2018
Well, Google is going to deprecate HPKP in Chrome/Chromium.
Same as T84#14765.
Jul 7 2018
Jun 29 2018
Check these alternatives out: