Video Preview
This is an activation atlas. It gives us a glimpse into the high-dimensional embedding spaces modern AI models use to organize and make sense of the world. The first model to really see the world like this, AlexNet, was published in 2012 in an 8-page paper that shocked the computer vision community by showing that an old AI idea would work unbelievably well when scaled. The paper's second author, Ilya Sutskever, would go on to co-found OpenAI, where he and the OpenAI team would massively scale up this idea again to create ChatGPT. This video is sponsored by KiwiCo; more on them later.

If you look under the hood of ChatGPT, you won't find any obvious signs of intelligence; instead, you'll find layer after layer of compute blocks called Transformers. This is what the T in GPT stands for. Each Transformer performs a set of fixed matrix operations on an input matrix of data and typically returns an output matrix of the same size. To figure out what it's going to say next, ChatGPT breaks apart what you ask it into words and word fragments, maps each of these to a vector, and stacks all of these vectors together into a matrix. This matrix is then passed into the first Transformer block, which returns a new matrix of the same size. This operation is then repeated again and again, 96 times in GPT-3.5 and reportedly 120 times in GPT-4. Now here's the absurd part: with a few caveats, the next word or word fragment that ChatGPT says back to you is literally just the..
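The pipeline the transcript describes (tokens mapped to vectors, stacked into a matrix, then pushed through a fixed number of shape-preserving Transformer blocks) can be sketched in a few lines of Python. This is a toy illustration, not OpenAI's actual code: the block below is a single random linear map standing in for real attention and MLP layers, and the vocabulary size, embedding width, and token ids are made-up assumptions.

```python
# A minimal sketch of the pipeline described above, with made-up sizes
# and token ids. Real blocks use attention and MLP layers; here a single
# random linear map stands in for one Transformer block.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 1_000  # tiny for the sketch; real vocabularies are ~50k+
DIM = 64            # assumed embedding width (real models use far more)
N_BLOCKS = 96       # GPT-3.5 reportedly stacks 96 Transformer blocks

# Embedding table: each word/word-fragment id maps to a DIM-dim vector.
embedding = rng.standard_normal((VOCAB_SIZE, DIM))

def transformer_block(x: np.ndarray) -> np.ndarray:
    """Stand-in for one Transformer block: fixed matrix operations that
    take a (tokens, DIM) matrix and return one of the same shape."""
    w = rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)
    return x @ w

def next_token_id(token_ids: list[int]) -> int:
    # Stack one embedding vector per input token into a matrix.
    x = embedding[token_ids]          # shape: (tokens, DIM)
    for _ in range(N_BLOCKS):         # repeat the block again and again
        x = transformer_block(x)      # shape is preserved every time
    # Score every vocabulary entry against the final vector and pick
    # the best match (greedy decoding; real models usually sample).
    scores = x[-1] @ embedding.T      # shape: (VOCAB_SIZE,)
    return int(np.argmax(scores))

print(next_token_id([3, 14, 159]))   # hypothetical token ids
```

Even in this toy version, the structural point from the transcript holds: the matrix keeps its shape through every block, which is what lets the blocks be stacked to arbitrary depth.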
--------------------------------------------------------
Direct link (opens in new window): https://youtu.be/UZDiGooFs54
Documento shows only a portion of the content here.
For the full details, please follow the link above.