What you are doing is the opposite of open. It is unfortunate that you hype up + propagate fear + thwart reproducibility + scientific endeavor. There is active research from other groups in unsupervised language models. You hype it up like it has never been done before. @jackclarkSF
-
Hi there. We've released a paper, samples, and a small model. Not our intention to hype our research relative to others (DM me specifics here and can probably help?). How do you think about the intersection of increasingly general models and the potential for malicious use?
-
Hiya, here are some malicious use cases (outlined in the blog). What do you think? pic.twitter.com/HwWu3Vxma5
-
This is exactly hype, and it is misleading the media. Where is any evidence that your system is actually capable of doing this? Which independent researchers have scrutinized your system? None. Media people don't count.
-
Thanks for continuing the discussion - the conditional news samples are pretty striking and have been fairly consistent. I feel like journalists are the appropriate experts to consult with regard to the news aspects. Talked to a few governments re disinformation and got validation.
-
Wrong. If you think it is truly capable, you will open it up to researchers. Not media people eagerly looking for clickbait. Incentives are misaligned. How can govts (plural) validate? Who is govt? Politicians? Administrators? Where are researchers in all this?
-
"Wrong." doesn't make it easy to have this discussion - pretty sure this isn't that black&white. Governments have researchers inside them, happy to chat more IRL re that stuff. I don't understand the "truly capable, you will open it up to researchers" - expand?
- 14 more replies
New conversation -
-
I’ve got a 15-page paper due in 2 weeks - any chance of a release soon?
- End of conversation
New conversation -
-
Hey @jackclarkSF, I've read the charter and all, but if you guys are 'already' closing off your research, you might as well call yourselves AIGatekeeper or something.
-
Hi there, the main thing for us here is not enabling malicious or abusive uses of the technology. We've published a research paper and small model. Very tough balancing act for us. Would love thoughts to the email address listed in post, or here.
-
Isn't the idea of OpenAI to democratize AI? Supposedly giving everyone "AI" is better than keeping it in the hands of a few. Did you make a U-turn on this? Because, ironically, OpenAI would be the bad guy according to that previous reasoning. BTW - that reasoning was always dubious anyway.
-
The idea of OpenAI is to make sure all of humanity benefits from long-term powerful / AGI-scale AI systems. As part of that, we need better policies around the distribution of increasingly transformative tech. This project is part of that.
-
But if the goal is supposed to be achieved by a centralized elite who decide what will 'benefit all humanity', it may not work well. Governments can combat bad actors, while the public benefits from the tech. Think PC/internet revolution. Sure, viruses, ID theft, etc., but overall, the benefit was huge.
End of conversation
New conversation -
-
Any plans to make a limited-bandwidth web demo version? That unicorn story is astonishing (and I'd be interested to see what the 9 rejected unicorn stories were like).
-
Hi Janelle! We considered this + other ways for allowing wider access - one concern around a limited-bandwidth version is that people might use multiple accounts/IP addresses to circumvent limits. Happy to discuss more - see email in post, also miles @ openai dot com
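(For illustration, here's a minimal sketch of the kind of per-account/per-IP limiter the reply above is worried about - everything in it is hypothetical, not any real OpenAI service. The point is structural: the limit constrains keys, not people.)

```python
import time
from collections import defaultdict

LIMIT = 10        # requests allowed per key per window (hypothetical number)
WINDOW = 3600.0   # window length in seconds

request_log = defaultdict(list)   # key (account id or IP address) -> request timestamps

def allow_request(key: str) -> bool:
    """Sliding-window rate limiter keyed on a single account/IP."""
    now = time.time()
    log = request_log[key]
    log[:] = [t for t in log if now - t < WINDOW]   # drop expired timestamps
    if len(log) >= LIMIT:
        return False   # this key is over quota for the current window
    log.append(now)
    return True

# Every fresh key starts with a fresh quota, so a user with N accounts or
# N IP addresses gets N * LIMIT requests per window - which is exactly the
# circumvention concern raised above.
```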
-
This is astonishing work. If only there were a corpus of question-and-response text you could use to train a chatbot with this level of credibility. You know, short back-and-forth messages between many people, publicly available.
- 1 more reply
New conversation -
-
-
"Due to our concerns about malicious applications of the technology, we are not releasing the trained model." Can you elaborate? How is it different than any other LM that was released?
-
One of the main things is the length and (frequent) coherence of the generated text. It seems qualitatively easier to generate malicious/faked content.
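(The coherence claim is checkable: the small model was released alongside the post. Here's a minimal sketch of conditional sampling from it, assuming the Hugging Face transformers port of the 117M checkpoint - the release itself shipped its own TensorFlow sampling scripts - with a prompt echoing the blog's unicorn example:)

```python
# Minimal sketch: conditional sampling from the small released GPT-2 model.
# Assumes the Hugging Face `transformers` port of the 117M checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small (117M) model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# News-style conditional prompt, echoing the blog's unicorn example.
prompt = "In a shocking finding, scientists discovered a herd of unicorns"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=200,                        # keep the continuation short
    do_sample=True,                        # sample instead of greedy decoding
    top_k=40,                              # truncated (top-k) sampling
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```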
-
Does it mean that once algorithms are good enough they should not be released to the public? I agree that there is a risk, a risk that exists in almost any innovation imo.
-
3D printers, for example - anyone can buy them although they can be used to produce guns (unfortunately).
-
I mean, actually with ink printers there's a ton of tech in them to make sure you can't print fake money, and with 3D printers there are various things to actually prevent you from printing various things (especially re certain types of weaponry) at certain tolerances
-
So these are a good example of another tech domain where people are thinking about ways to apply controls that are inherent to the tech artifact (printer/model/etc) in question. Not foolproof and quite difficult, but they're trying.
-
Agree, it's a complicated issue. What risks do you see in generating fake content?
-
You also need to take into account that bad actors might have the resources to reproduce your model while most good actors (the public) don't.
- 2 more replies
New conversation -