The president’s “official act” would be issuing the pardon. The referenced Supreme Court decision just means it would not be a “crime” for the president to issue said pardon, not that the pardon would stand.
The president can’t intervene at the state level. From americanbar.org:
A U.S. president has broad but not unlimited powers to pardon. For example, a president cannot pardon someone for a state crime.
This ignores the first part of my response - if I, as a legitimate user, might get caught up in one of these trees, either by mistakenly approving a bot, or approving a user who approves a bot, and I risk losing my account if this happens, what is my incentive to approve anyone?
Additionally, let’s assume I’m a really dumb bot creator, and I keep all of my bots in the same tree. I don’t bother to maintain a few legitimate accounts, and I don’t bother to have random users approve some of the bots. If my entire tree gets nuked, it’s still only a few weeks until I’m back at full force.
With a very slightly smarter bot creator, you also won’t have a nice tree:
As a new user looking for an approver, how do I know I’m not requesting (or otherwise getting) approved by a bot? To appear legitimate, they would be incentivized to approve legitimate users, in addition to bots.
A reasonably intelligent bot creator would have several accounts they directly control and use legitimately (this keeps their foot in the door), would mix reaching out to random users for approval with having bots approve bots, and would approve legitimate users in addition to bots. The tree ends up as much more of a tangled graph.
I think this would be too limiting for humans, and not effective for bots.
As a human, unless you know the person in real life, what’s the incentive to approve them, if there’s a chance you could be banned for their bad behavior?
As a bot creator, you can still achieve exponential growth - every time you create a new bot, you have a new approver, so you go from 1 -> 2 -> 4 -> 8. Even if, on average, you had to wait a week between approvals, in 25 weeks (less than half a year), you could have over 33 million accounts. Even if you play it safe, and don’t generate/approve the maximum number of accounts every week, you’d still have hundreds of thousands to millions in a matter of weeks.
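To make the arithmetic concrete, here’s a toy model of that growth, assuming every existing account approves exactly one new bot per week:

```python
# Toy model: every account approves one new bot account per week,
# so the total doubles weekly.
accounts = 1
for week in range(1, 26):
    accounts *= 2
    print(f"week {week}: {accounts:,} accounts")
# week 25: 33,554,432 accounts -- over 33 million
```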
Are they Bluetooth headphones? If so, check the protocols supported by your phone, and by the headphones, e.g. aptX vs LDAC vs SBC. It’s possible that it’s not a “downgrade” on the new phone, but rather an upgrade to a better protocol, but unfortunately not one compatible with your headphones, so you end up using a low quality fallback.
You may also want to check your settings, and see if you can select a specific protocol. Sometimes a lesser protocol is chosen by default, if the better protocol uses more battery. This may be available to you in the phone settings, or as an option in an app for the headphones (e.g. my Anker Soundcore app allows choosing between two protocols).
If your drive is the bottleneck, this will make things worse. If you want to proceed:
You’re already using ffmpeg to get the sequence of frames, correct? You can add the -ss and -t flags to give a start time and a duration. Generate a list of offsets by dividing the length of the video by the number of processes you want, and feed them through GNU Parallel to your ffmpeg command.
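As a rough sketch of the same idea in Python instead of GNU Parallel (the filename, process count, and output pattern here are made up for illustration):

```python
# Sketch: split one video into N time ranges and run one ffmpeg
# process per range. Filenames and parameters are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor

VIDEO = "input.mp4"  # hypothetical input file
PROCS = 4            # number of parallel ffmpeg processes

def duration_seconds(path):
    # ffprobe prints the container duration as a bare number
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1", path,
    ])
    return float(out.decode().strip())

def extract(job):
    start, length, idx = job
    # -ss gives the start offset, -t the duration of this slice;
    # frames are numbered per slice to avoid filename collisions
    subprocess.run([
        "ffmpeg", "-ss", str(start), "-t", str(length), "-i", VIDEO,
        f"slice{idx}_frame%06d.png",
    ], check=True)

if __name__ == "__main__":
    total = duration_seconds(VIDEO)
    step = total / PROCS
    jobs = [(i * step, step, i) for i in range(PROCS)]
    with ProcessPoolExecutor(max_workers=PROCS) as pool:
        list(pool.map(extract, jobs))
```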
My first thought was similar - there might be some hardware acceleration happening for the jpgs that isn’t for the other formats, resulting in a CPU bottleneck. A modern hard drive over USB 3.0 should be capable of hundreds of megabits to several gigabits per second. It seems unlikely that’s your bottleneck (though feel free to share stats and correct the assumption if this is incorrect - if your pngs are in the 40 megabyte range, your 3.5 per second would be pretty taxing).
If you are seeing only 1 CPU core at 100%, perhaps you could split the video clip, and process multiple clips in parallel?
If your computer is compromised to the point someone can read the key, read words 2-5 again.
This is FUD. Even if Signal encrypted the local data, at the point someone can run a process on your system, there’s nothing to stop the attacker from adding a modified version of the Signal app, updating your path, shortcuts, etc to point to the malicious version, and waiting for you to supply the pin/password. They can siphon the data off then.
Anyone with actual need for concern should probably only be using their phone anyway, because it cuts your attack surface by half (more than half if you have multiple computers), and you can expect to be in possession/control of your phone at all times, vs a computer that is often left unattended.
it doesn’t unravel the underlying complexity of what it does… these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases
This can be said of any higher-level language, or API. There is always a cost to abstraction. Binary -> Assembly -> C -> Python. As you go up that chain, many things get easier, but some things become impossible. You always have the option to drop down, though, and these regex tools are no different. Software development, sysops, devops, etc are full of compromises like this.
You are conflating the concept and the implementation. PFS is a feature of network protocols, and they are a frequently cited example, but they are not part of the definition. From your second link, the definition is:
Perfect forward secrecy (PFS for short) refers to the property of key-exchange protocols (Key Exchange) by which the exposure of long-term keying material, used in the protocol to authenticate and negotiate session keys, does not compromise the secrecy of session keys established before the exposure.
And your third link:
Forward secrecy (FS): a key management scheme ensures forward secrecy if an adversary that corrupts (by a node compromise) a set of keys at some generations j and prior to generation i, where 1 ≤ j < i, is not able to use these keys to compute a usable key at a generation k where k ≥ i.
Neither of these mention networks, only protocols/schemes, which are concepts. Cryptography exists outside networks, and outside computer science (even if that is where it finds the most use).
Funnily enough, these two definitions (which, I’ll remind you, come from the links you provided) are directly contradictory. The first describes protecting information “before the exposure” (i.e. past messages), while the second says a compromise at j cannot be used to compromise k, where k is strictly greater than j (i.e. a future message). So much for the hard and fast definition from “professional cryptographers.”
Now, what you’ve described with Matrix sounds like it is having a client send old messages to the server, which are then sent to another client. The fact the content is old is irrelevant - the content is sent in new messages, using new sessions, with new keys. This is different from what I described, about a new client downloading old messages (encrypted with the original key) from the server. In any case, both of these scenarios create an attack vector through which an adversary can get all of your old messages, which, whether or not you believe it violates PFS by your chosen definition, does defeat its purpose (perhaps you prefer this phrasing to “break” or “breach”).
This seems to align with what you said in your first response, that Signal’s goal is to “limit privacy leaks,” which I agree with. I’m not sure why we’ve gotten so hung up on semantics.
I wasn’t going to address this, but since you brought it up twice, running a forum is not much of a credential. Anyone can start a forum. There are forums for vaxxers and forums for antivaxxers, forums for atheists and forums for believers, forums for vegans and forums for carnivores. Not everyone running these forums is an expert, and necessarily, not all of them are “right.” This isn’t to say you don’t have any knowledge of the subject matter, only that running a forum isn’t proof you do.
If you’d like to reply, you may have the last word.
I would argue that it is not limited to network traffic, it is the general concept that historical information is not compromised, even if current (including long-term) secrets are compromised.
From my comment earlier:
There is no sharing of messages between linked devices - that would break forward secrecy
This describes devices linked to an account, where each is retrieving messages from the server - not a point-to-point transfer, which is how data is transferred from one Android device to another. If a new device could retrieve and decrypt old messages on the server, that would be a breach of the forward secrecy concept.
Signal Desktop does not support transferring message history to or from any device.
You’re describing something very different - you already have the messages, and you already have them decrypted. You can transfer them without the keys. If someone gets your device, they have them, too.
Whether Signal keeps the encrypted messages or not, a new device has no way of getting the old messages from the server.
“They” is the browser/browser maker. The browser, acting as the client, would have access to the keys and data. The browser maker could do whatever they want with it.
To be clear, I’m not saying they would, only that it defeats the purpose of an E2E chat, where your goal is to minimize/eliminate the possibility of snooping.
Using an E2E chat app in your browser necessarily makes the keys and decrypted messages available to your browser. They would have the ability to read messages, impersonate users, alter messages, etc. It would defeat the purpose of a secure messaging platform.
There is no sharing of messages between linked devices - that would break forward secrecy, which prevents a successful attacker from getting historical messages. See the first bullet of: https://support.signal.org/hc/en-us/articles/360007320551-Linked-Devices
Messages are encrypted per device, not per user (https://signal.org/docs/specifications/sesame/), and forward secrecy is preserved (https://en.m.wikipedia.org/wiki/Forward_secrecy, for the concept in general, and https://signal.org/docs/specifications/doubleratchet/ for Signal’s specific approach).
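For intuition only, here’s a toy symmetric ratchet (a drastic simplification - Signal’s actual Double Ratchet also mixes in Diffie-Hellman outputs): each message key is derived from a chain key that is immediately replaced, so compromising today’s chain key reveals nothing about keys that were already used and discarded.

```python
# Toy hash ratchet, for intuition only -- NOT Signal's Double Ratchet.
# Each step derives a message key, then overwrites the chain key, so a
# later compromise of the chain key cannot recover earlier message keys
# (SHA-256 cannot be run backwards).
import hashlib

def step(chain_key: bytes) -> tuple[bytes, bytes]:
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

chain = b"initial shared secret"
for i in range(3):
    msg_key, chain = step(chain)  # the old chain key is discarded here
    print(i, msg_key.hex()[:16])
# Someone who steals `chain` now can derive future keys, but not the
# msg_key values above -- that is the forward secrecy property.
```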
Yes, as long as you set up the desktop client before sending the message.
Messages sent with Signal are encrypted per device, not per user, so if your desktop client doesn’t exist when the message is sent, it is never encrypted and sent for that device.
When you set up a new client, you will only see new messages.
See https://signal.org/docs/specifications/sesame/ for details.
This is not entirely correct. Messages are stored on their servers temporarily (last I saw, for up to 30 days), so that even if your device is offline for a while, you still get all your messages.
In theory, you could have messages waiting in the queue for device A when you add device B, but device B will still not get those messages, even though the encrypted messages are still on their servers.
This is because messages are encrypted per device, rather than per user. So if you have a friend who uses a phone and computer, and you also use a phone and computer, the client sending the message encrypts it three times, and sends each encrypted copy to the server. Each client then pulls its copy, and decrypts it. If a device does not exist when the message is encrypted and sent, it is never encrypted for that device, so that new device cannot pull the message down and decrypt it.
For more details: https://signal.org/docs/specifications/sesame/
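A highly simplified sketch of that fan-out (illustrative only - real Signal clients run the Double Ratchet per device rather than using a single symmetric key per device as here): the sender encrypts one copy per device that exists at send time, so a device added later has no ciphertext addressed to it.

```python
# Illustrative sketch of per-device fan-out -- not Signal's real code.
# One plaintext is encrypted separately for every device that exists at
# send time; a device registered later has no copy it can decrypt.
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

# Devices that exist when the message is sent (one key per device).
devices = {
    "friend-phone": Fernet.generate_key(),
    "friend-computer": Fernet.generate_key(),
    "my-computer": Fernet.generate_key(),
}

plaintext = b"hello"
# The sender produces one ciphertext per existing device ("three times"
# in the scenario above) and uploads each copy to the server queue.
server_queue = {
    dev: Fernet(key).encrypt(plaintext) for dev, key in devices.items()
}

# A device registered *after* the send has its own key...
new_key = Fernet.generate_key()
# ...but no entry in the queue was encrypted for it, and it cannot
# decrypt the copies addressed to the other devices.
for dev, blob in server_queue.items():
    try:
        Fernet(new_key).decrypt(blob)
    except InvalidToken:
        print(f"new device cannot read the copy for {dev}")
```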
I’ve personally lived in places where the closest convenience store was 2.25 km away and the grocery store was nearly 18 km, as well as places where a convenience store was literally a part of my building, and grocery stores were within walking distance.
The U.S. is enormous and varied. Take a look at truesizeof and compare the U.S. and Europe (don’t forget to add Alaska and Hawaii - they won’t be included in the contiguous states). Consider how different London is from rural Romania.