This is not so much a technical question as a reflection on the subject.
REST has popularized a clean way of serving resources over the HTTP protocol and lets developers keep projects tidy by separating resources from the user interface (with REST APIs, the back end now really deals with back-end concerns only).
For those APIs, most of the time we use GET, PUT/PATCH (hmm, meh?), POST and DELETE, which together mimic database CRUD.
But as time spent on our projects goes by, we feel the UX can be improved by adding tons of great features. For example, why should the user be afraid of deleting a resource? Why not just add a recovery system (like the one in the Google Keep app that lets us undo a deletion, which I think is awesome in terms of UX)?
One practice that prevents unintentional deletion is adding a column to the table that represents the resource. For example, when I am about to delete a book, clicking the delete button merely flags that row as "deleted = TRUE" in my database, and rows flagged as deleted are hidden when browsing the list of resources (GET).
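The flag-column pattern can be sketched like this, using sqlite3 from the standard library; the table and column names ("books", "deleted") are just illustrative, not any particular schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO books (title) VALUES ('REST in Practice'), ('HTTP Basics')")

def soft_delete(book_id):
    # The API's DELETE only flags the row; the data stays recoverable.
    conn.execute("UPDATE books SET deleted = 1 WHERE id = ?", (book_id,))

def list_books():
    # GET never shows soft-deleted rows.
    return [row[0] for row in conn.execute("SELECT title FROM books WHERE deleted = 0")]

soft_delete(1)
print(list_books())  # ['HTTP Basics']
```

Undoing the deletion is then just another UPDATE setting the flag back to 0, which is what makes the Keep-style recovery feature cheap to build.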
This conflicts with our dear and beloved REST pattern, as there is no distinction between DELETE and DESTROY "methods".
What I mean is: should we think about making REST evolve with our UX needs (which would also mean making the HTTP protocol evolve), or should this stay a purist style of resource management, where we follow the HTTP protocol without trying to bend it and instead adapt to it using workarounds (like using PATCH for soft deletion)?
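The PATCH workaround I mention could look like this minimal sketch; the in-memory store and handler names are hypothetical, standing in for whatever framework handles the request:

```python
# The client sends PATCH /books/1 with {"deleted": true} instead of DELETE.
books = {1: {"title": "REST in Practice", "deleted": False}}

def patch_book(book_id, changes):
    # Generic PATCH semantics: merge the partial representation into the resource.
    books[book_id].update(changes)
    return books[book_id]

def get_books():
    # GET filters out soft-deleted resources.
    return {i: b for i, b in books.items() if not b["deleted"]}

patch_book(1, {"deleted": True})  # the "soft DELETE"
print(get_books())                # {}
```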
Personally, I would like to see at least four new methods, since we are trying to qualify a resource as precisely as possible:
I'm not sure this makes sense even within the web development realm; the starting premises seem to be wrong.
RFC 7231 offers this explanation for POST:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
Riddle: if this is the official definition of POST, why do we need GET? Anything that the target can do with GET can also be done with POST.
The answer is that the additional constraints on GET allow participants to make intelligent decisions using only the information included in the message.
For example, because the header data informs the intermediary component that the method is GET, the intermediary knows that the action upon receiving the message is safe, so if a response is lost the message can be repeated.
The entire notion of crawling the web depends upon the fact that you can follow any safe links to discover new resources.
The browser can pre-fetch representations, because the information encoded in the link tells it that the message to do so is safe.
The way that Jim Webber describes it: "HTTP is an application, but it's not your application". What the HTTP specification does is define the semantics of messages, so that a generic client can be understood by a generic server.
To use your example: the API consumer may care about the distinction between delete and destroy, but the browser doesn't; the browser just wants to know what message to send, what the retry rules are, what the caching rules are, how to react to various error conditions, and so on.
That's the power of REST -- you can use any browser that understands the media-types of the representations, and get correct behavior, even though the browser is completely ignorant of the application semantics.
The browser doesn't know that it is talking to an internet message board or the control panel of a router.
In summary: it looks to me as though you are trying to achieve richer application semantics by changing the messaging semantics, which violates the separation of concerns.