Deep fakes

John Clark

Oct 3, 2025, 8:14:07 AM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
I asked OpenAI's GPT the following question because OpenAI released Sora-2 three days ago and it's super good at making deep fakes.

"Would it be possible theoretically for a video camera to digitally sign the time signal from the National Bureau of Standards and GPS information so that you could prove that a video file was created at a specific time and at a specific place and was exactly the same information as what the camera recorded and had not been edited? I ask this because I think deep fakes are going to become a big problem."

GPT gave me the following answer: 

--
Short answer

Yes — in principle a camera can produce cryptographically signed video files that prove when and where they were recorded and that the file matches what the camera captured (i.e., has not been altered). Doing that securely in practice requires careful hardware, cryptography, and anti-spoofing measures; without those, attackers (or clever deep-fake creators) can still defeat the system.

Below I give a practical architecture, how verification works, the main failure modes, and realistic mitigations.


How it would work (high level)

  1. Trusted hardware root-of-trust in the camera (a secure element or TPM) holds a private key that never leaves the device.

  2. At capture time the camera:

    • Records the raw sensor data (frames) and metadata (timestamp, GPS fix, IMU readings, camera settings).

    • Computes cryptographic digests (hashes) of the raw data in small chunks (e.g., per frame or per second).

    • Attaches metadata (time, GPS coordinates, firmware version, monotonic counter, device ID).

    • Signs those digests and metadata with the device private key (e.g., ECDSA or Ed25519 on SHA-256); a minimal signing sketch follows this list.

    • Optionally writes hashes into an append-only local log and transmits the log to an external witness / timestamping service (see below).

  3. The camera outputs a signed video bundle that contains the video frames (or encoded stream), the metadata, signatures, and the device certificate chain (public key certs).

  4. To verify later, a verifier checks:

    • The signature is valid under the device public key.

    • The device certificate is trusted (issued by a known CA or manufacturer).

    • The timestamp and GPS claims are consistent with other records (witness servers, block anchors, IMU).

    • The video frames match the signed hashes (so no editing occurred).
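
As a minimal sketch of the capture-time signing in step 2, assuming the Python `cryptography` package for Ed25519 (the metadata fields and the CAM12345 device ID are illustrative, not a real camera API):

import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would live inside the secure element and never leave it.
device_key = Ed25519PrivateKey.generate()

def sign_chunk(frame_bytes, timestamp_utc, gps, counter):
    """Hash one chunk of sensor data and sign the digest plus its metadata."""
    metadata = {
        "device_id": "CAM12345",           # illustrative device ID
        "timestamp_utc": timestamp_utc,
        "gps": gps,
        "frame_hash": "sha256:" + hashlib.sha256(frame_bytes).hexdigest(),
        "monotonic_counter": counter,
    }
    # Sign a canonical (sorted-key) JSON encoding so signer and verifier agree on the bytes.
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = device_key.sign(payload).hex()
    return metadata

chunk = sign_chunk(b"...raw frame bytes...", "2025-10-03T12:34:56Z",
                   {"lat": 40.7128, "lon": -74.0060}, counter=1024)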


Concrete components (recommended)

  • Secure element (hardware root-of-trust): stores private key and performs signing operations.

  • Monotonic counter / secure clock: prevents replay of old signatures and provides tamper evidence.

  • Signed metadata per chunk: include (frame-range, SHA-256(frame data), timestamp, latitude, longitude, altitude, IMU snapshot, firmware hash, counter).

  • Chain-of-hashes: link chunk i to i-1 by including the previous chunk's hash in the next signed metadata (hash chain) so removal or re-ordering is detectable; a short sketch follows this list.

  • External time/witnessing:

    • Send periodic signed digests (or full signed metadata) to one or more independent servers (over TLS) that log the receipt time; this provides an external timestamp and cross-check.

    • Optionally anchor digests to a public append-only ledger (e.g., a blockchain or public notarization service) so signatures can be independently verified later even if the device is destroyed.

  • Certificate management:

    • Each device has a certificate signed by a manufacturer CA (or trusted PKI).

    • Include certificate revocation checking (OCSP/CRL) for long-term validity claims.

  • GNSS anti-spoofing / multi-source positioning:

    • Use GNSS receiver that supports anti-spoofing or authenticated GNSS when available.

    • Cross-check GPS with IMU/vision-based odometry and with network-based location when online.

  • Forensic metadata:

    • Include raw sensor noise fingerprint, lens parameters, EXIF-like details that are hard to fabricate precisely.
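
The chain-of-hashes component above can be sketched with nothing beyond the standard library; the genesis value and field names here are assumptions chosen to mirror the illustrative JSON in the next section:

import hashlib, json

GENESIS = "sha256:" + "0" * 64   # assumed starting value for the chain

def chain_chunks(frame_hashes):
    """Link each chunk to its predecessor so removal or re-ordering is detectable."""
    chunks, prev_hash = [], GENESIS
    for i, fh in enumerate(frame_hashes):
        body = {"chunk_index": i, "frame_hash": fh, "prev_hash": prev_hash}
        # Each chunk's hash covers its predecessor's hash, forming the chain.
        prev_hash = "sha256:" + hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chunks.append(body)
    return chunks

def chain_intact(chunks):
    """Recompute the chain; any edit, deletion, or re-ordering breaks a prev_hash link."""
    prev_hash = GENESIS
    for body in chunks:
        if body["prev_hash"] != prev_hash:
            return False
        prev_hash = "sha256:" + hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
    return True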


Example of a signed chunk (illustrative JSON)

{
  "device_id": "CAM12345",
  "chunk_index": 42,
  "timestamp_utc": "2025-10-03T12:34:56Z",
  "gps": { "lat": 40.7128, "lon": -74.0060, "alt": 10.2 },
  "imu": { "accel": [...], "gyro": [...] },
  "frame_hash": "sha256:3a7bd3...",
  "prev_hash": "sha256:9f2c1b...",
  "firmware_hash": "sha256:ab12cd...",
  "monotonic_counter": 1024,
  "signature": "ecdsa-secp256r1:3045..."
}

(Verifier would check signature using the device public key, ensure frame_hash matches the file, confirm prev_hash chains, and validate certificate status.)


Verification steps an auditor would perform

  1. Verify signature(s) with the public key included in the device certificate chain.

  2. Validate the certificate chain to a trusted root and check for revocation.

  3. Recompute hashes of the video frames. Confirm they match the signed frame_hash values and that the hash chain is intact.

  4. Compare timestamps and GPS claims to external witnesses (server receipts, network logs) and check them for consistency against the device’s IMU/vision data.

  5. Check firmware hash and device provenance to detect a tampered camera.

  6. If anchored to a public ledger, verify the digest appears at the stated ledger time.
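
Steps 1 and 3 can be sketched as follows, assuming the signing scheme from the earlier sketch (Ed25519 over canonical JSON); a real verifier would also perform the certificate-chain, revocation, witness, and ledger checks from steps 2, 4, 5, and 6:

import hashlib, json
from cryptography.exceptions import InvalidSignature

def verify_chunk(metadata, frame_bytes, device_public_key):
    """Check the signature and that the frame bytes match the signed hash."""
    # Step 3: recompute the frame hash and compare it to the signed value.
    if metadata["frame_hash"] != "sha256:" + hashlib.sha256(frame_bytes).hexdigest():
        return False
    # Step 1: verify the signature over the same canonical JSON the camera signed.
    unsigned = {k: v for k, v in metadata.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        device_public_key.verify(bytes.fromhex(metadata["signature"]), payload)
        return True
    except InvalidSignature:
        return False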


Main attack/failure modes (and mitigations)

  • Private key compromise: if attacker extracts the device private key they can sign fakes.
    Mitigation: secure element, tamper-evident hardware, key destruction on tamper detection, certificate revocation.

  • GPS spoofing: attacker can feed fake GNSS signals to claim a false location/time.
    Mitigation: GNSS anti-spoofing, cross-check with IMU and external witnesses, multiple positioning sources.

  • Pre-recording / staging: video could be recorded earlier and only signed later, or a fake staged event could be captured by a compromised camera.
    Mitigation: real-time external challenge-response, requiring the camera to include unpredictable external data (a nonce) from a trusted server at capture time, or streaming signed frames to a separate server that records the receipt time (sketched after this list).

  • Firmware compromise: a malicious firmware could sign doctored content.
    Mitigation: secure boot, firmware attestation, signed firmware images, and inclusion of the firmware hash in signatures.

  • Chain-of-custody / legal admissibility: even technical proof may be disputed in court if the PKI, manufacturer practices, or witness services are untrusted.
    Mitigation: well-documented standards, independent audits, multiple independent witnesses.
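
The nonce-based challenge-response above can be sketched from the witness server's side; the ten-second freshness window and the "nonce" field name are assumed parameters, not a standard:

import os, time

issued = {}   # nonce -> issue time, kept by the witness server

def issue_nonce():
    """Hand the camera an unpredictable value it must embed in the signed metadata."""
    nonce = os.urandom(16).hex()
    issued[nonce] = time.time()
    return nonce

MAX_AGE_SECONDS = 10   # assumed freshness window

def accept_chunk(metadata):
    """Accept a chunk only if it embeds a fresh nonce, proving capture happened
    after the nonce was issued (so the footage cannot be pre-recorded)."""
    issued_at = issued.pop(metadata.get("nonce", ""), None)
    return issued_at is not None and (time.time() - issued_at) <= MAX_AGE_SECONDS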


Practicality & realism

  • Technically feasible now: many building blocks exist (secure elements, digital signatures, timestamping services).

  • Deploying this broadly is an engineering + policy effort: manufacturers must include secure hardware, certificate infrastructure, anti-spoofing features, and accepted auditing standards.

  • Even with perfect signing, signatures prove only that the camera produced and signed the bits at a claimed time and location; they don't prove the semantics (that the event in the frame is genuine rather than contrived or staged). Human context and corroborating evidence remain important.


Additional strong protections you can add

  • Live streaming of signed frames to an independent server (witness) prevents post-capture tampering and provides a real-time external log.

  • Multi-party notarization: have multiple unrelated servers each log the digest; this makes it much harder for an attacker to fake all witnesses.

  • Randomized visible challenge: the verifier or witness server issues a random code that the camera must visibly display (or incorporate into the audio/video) and sign; this proves liveness and synchrony.

  • Public anchoring: anchor chunk hashes daily to a public immutable ledger for long-term proof.
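
For daily public anchoring, one common pattern is to publish a single Merkle root over a day's chunk hashes rather than each hash individually; here is a minimal sketch (the ledger submission itself is out of scope):

import hashlib

def merkle_root(leaf_hashes):
    """Reduce a day's chunk hashes to a single root; any leaf can later be proven
    against the published root with a logarithmic-size inclusion path."""
    # Accept hashes with or without the "sha256:" prefix used in the chunk format.
    level = [bytes.fromhex(h.removeprefix("sha256:")) for h in leaf_hashes]
    if not level:
        return None
    while len(level) > 1:
        if len(level) % 2:   # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()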


Bottom line

Yes, you can make video files that are cryptographically provable to be the exact bytes a camera produced at a claimed time and place, provided the camera’s hardware, firmware, and the surrounding ecosystem (PKI, witnesses, anti-spoofing) are trustworthy. Such a system raises the bar substantially against deepfakes and post-capture edits, but it is not a silver bullet: attackers who compromise keys, firmware, or the capture process itself (staging, spoofing) can still produce convincing fakes. Strong system design (secure hardware, external witnesses, anti-spoofing, and audited procedures) is required to make such proofs robust in practice.


If you’d like, I can:

  • Draft a short technical spec for a “forensic-signing camera” (interfaces, signed bundle format, certificate lifecycle), or

  • Produce a sample verifier script (pseudocode/JSON examples) that demonstrates how to check signatures and hashes. Which would you prefer?

John K Clark    See what's on my new list at  Extropolis
