Article on popular science: prompt injection attacks are a new risk associated with the interation if llms into other services. •prompt injection attacks imply the use of a prompt that bypass safety restrictions of a given ai / llm, which cannot differentiate between illicit instructions and inputs. •a proper prompt injection attacks can thus use an assistant to interact with a service and complete a sets of instructions. I’d like to hear what you think about this
You must log in or # to comment.