BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20250309T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260409T052038Z
UID:208217@calendar.wisc.edu
DTSTART;TZID=America/Chicago:20250404T110000
DTEND;TZID=America/Chicago:20250404T120000
DESCRIPTION:Scott Aaronson (University of Texas at Austin). Progress on AI 
 safety and alignment has been almost entirely empirical. In this talk\, I'
 ll survey a few areas where theoretical computer science can contribute to
  AI safety:\n- How can we robustly watermark the outputs of Large Language
  Models and other generative AI systems\, to help identify academic cheati
 ng\, deepfakes\, and AI-enabled fraud?  \n- Can one insert undetectable cr
 yptographic backdoors into neural nets\, for good or ill?\n- Should we exp
 ect neural nets to be generically interpretable?\n\nCONTACT: 262-4196\, di
 eter@cs.wisc.edu
LOCATION:1240 Computer Sciences
SUMMARY:AI Safety and Theoretical Computer Science
END:VEVENT
END:VCALENDAR