Simply because we are, to use the technical term, suckers. Not always as individuals, but certainly collectively. The defining attribute of the sucker is, of course, an inability to learn from experience. And it seems that meaningfully learning from our mistakes is a foreign concept to us. Nay, it is anathema. The darkest heresy imaginable. Something no one would bring up in polite company.
Something only spoken of by rabid crackpots, on their lunatic-fringe blogs, during the full moon. We will happily savor the same snake oils again and again, swallowing the same non-solutions to the same non-problems every time — because we refuse to learn from the past. And much of the history of personal computing can only be understood in light of this fact.
For instance, we appear to have learned nothing from the GIF debacle. Unisys tried to use software patents to impose a tax on all Internet users, and everyone jumped ship from GIF to other graphics formats — ones supposedly out of the reach of patent trolls. As though anything could be safe from the well-funded and litigious while software patents remain legal. So nearly everyone switched to PNG and the like, and the storm died down. And now format wars rage once more — this time over video codecs.
Patent trolls smell the blood and fear of lucrative, juicy prey: Web users and content providers live in terror, dreading the day when they will have to switch video codecs. As we all know, this is an exceedingly unpleasant process. First, the web browser or server must be lifted on hydraulic jacks. Then, its hood is opened, and greasy mechanics will grimly crank the codec hoist, lifting the old video engine out from its moorings. The vacant compartment must be scrubbed clean of black, sooty HTTP residue before the new codec can be winched into place.
If you want altered functionality, someone must physically replace the shafts and gears! The core idiocy of all web format wars lies in the assumption that there must necessarily be a pre-determined, limited set of formats permanently built into a web browser. That this is nonsense should have been obvious from the beginning, because the idiocy of laboriously-standardized data formats was already plain half a century ago — long before interactive personal computing:
I just want to tell them to you quickly. I was in the Air Force in , and I saw it in , and it probably goes back one year before. Air Training Command had to send tapes of many kinds of records around from Air Force base to Air Force base. All you had to do [to] read a tape back then was to read the front part of a record — one of these big records — into core storage, and start jumping indirect through the pointers, and the procedures were there. You just read it in. But basically, you want to be able to distribute all of the knowledge of all the things that are there, and in fact, the Internet is starting to move in that direction as people discover ever more complex HTML formats, ever more intractable. HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. I suspect Microsoft is in the latter camp here. This is one of these mistakes that has been recapitulated every generation.

Why exactly does a browser need to ship with any preconceived notions of how to decode video and graphics?
Or audio, or text, for that matter? It is, after all, running on something called a programmable computer. Which is why it is never, ever done! What is wanted is a way to ship the decoder along with the content: something not unlike a competently written, non-user-hostile incarnation of Adobe Flash. It goes without saying that this would be a far easier sell were we using a non-braindead CPU architecture — one where buffer overflows and the like are physically impossible.
There is, however, no reason why it could not be built on top of existing systems by competent hands. As for the question of hardware accelerators: FPGAs have become so cheap that there is simply no reason to ship a non-reprogrammable video or audio decoder ever again.
Why pay royalties and fatten patent trolls? Let the act of loading the decoder algorithm — whether a sequence of instructions for a conventional CPU, or an FPGA bitstream — be coincident with the act of loading the media file to be played. The latter will contain the codec (or a hash thereof, for cache lookup) as a header.
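As a rough sketch of this arrangement (the container layout, the names MAGIC, pack_media and load_media, and the in-memory decoder cache are all invented for illustration, not taken from the article), loading such a file might look like this:

    import hashlib
    import struct

    # Hypothetical self-describing container (layout invented for this sketch):
    #   8-byte magic | 32-byte SHA-256 of the decoder | 4-byte decoder length
    #   | decoder blob (possibly empty) | media payload
    MAGIC = b"SELFDEC1"
    DECODER_CACHE = {}   # hash -> decoder blob; stands in for an on-disk cache

    def pack_media(decoder: bytes, payload: bytes, inline: bool = True) -> bytes:
        """Build a container; omit the decoder body if the receiver likely has it cached."""
        digest = hashlib.sha256(decoder).digest()
        body = decoder if inline else b""
        return MAGIC + digest + struct.pack(">I", len(body)) + body + payload

    def load_media(blob: bytes):
        """Split a container into (decoder, payload), consulting the cache as needed."""
        if blob[:8] != MAGIC:
            raise ValueError("not a self-describing container")
        digest = blob[8:40]
        (dec_len,) = struct.unpack(">I", blob[40:44])
        decoder, payload = blob[44:44 + dec_len], blob[44 + dec_len:]
        if decoder:
            # Decoder shipped inline: verify it against the advertised hash, then cache it.
            if hashlib.sha256(decoder).digest() != digest:
                raise ValueError("decoder does not match advertised hash")
            DECODER_CACHE[digest] = decoder
        else:
            # Only the hash shipped: the decoder must already be cached (or be fetched).
            decoder = DECODER_CACHE.get(digest)
            if decoder is None:
                raise LookupError("decoder not cached; re-request the file with it inline")
        return decoder, payload

Whether the returned blob is instructions for a sandboxed software decoder or an FPGA bitstream makes no difference to the loader; the format travels with the data, and "switching codecs" amounts to publishing files with a different header.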
At present, working around a software patent is difficult only because switching formats takes considerable work and requires some unusual action on the part of variably-literate users.

Think of it this way: how is this ideal compatible with your earlier assertions about computer insecurity? Anyhow, I think the Curl language is close to your proposed ideal… except that the implementation is proprietary.
So you would have to part with the PC architecture, replacing it with something rather different. And how exactly do you propose to prevent someone from packaging their software as an incomprehensible blob? Have you discovered a programming language in which it is impossible to write obfuscated code?

Of course anyone can choose to write obfuscated code. It is an ethical problem, just like certain others I have discussed. There is some precedent, after all.

My experience is that all non-trivial software is essentially incomprehensible to essentially all people. It takes far more effort than most people are willing to invest to improve on that. The fact that you frame Turing-completeness as something that makes a piece of software simpler to maintain than a physical machine betrays either some kind of confusion about fundamental concepts in the theory of computability, or ignorance of the complexity of production software, or both.
Now, find the bug in this code you just downloaded (take the snippet below as a stand-in). This is just an example; the actual code you downloaded was probably a different set of lines, written by someone else to do something else.
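For illustration, then, something as mundane as a binary search that does not quite do what its docstring promises (the routine and its flaw are invented for this example, not taken from the thread):

    def find(sorted_items, target):
        """Return the index of target in sorted_items (ascending), or -1 if absent."""
        lo, hi = 0, len(sorted_items) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1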
So, how long do you think it will take to find? Or should we simply hold that there is no reason to believe there is a bug in it, until someone notices it and the damage is already done? That sounds a lot like where we are now. Good luck convincing ordinary users that they should spend their time exerting political and economic pressure towards making large-scale changes to the computer industry that would finally put tools they have no use for into their hands.
If proprietary plug-ins for existing browsers are your cup of tea, why not use Adobe Flash?

It is not my cup of tea. I mention Curl because it is closer to your vision. You can also use a Curl app, which is simply a box that downloads, compiles, and executes Curl code — that seems closer to your vision above. Curl has other properties that make it worth considering. It supports limited composition — e.g., a Curl app inside a Curl app, via sandboxing. But ars longa, vita brevis…

I may or may not have confused Oberon with Inferno.
And how exactly does search indexing work in your hypothetical utopia? Do I have to run the program and make sure I somehow hit every possible output state?

Once again we need a declarative format. Have the CPU execute S-expressions directly, as discussed in my other posts. These are nicely searchable. An HTML page is a program — written in a declarative language. That allows it to also be used as a data format — which gives us things like search and simple authoring.
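A tiny sketch of the point, using nested Python lists as stand-ins for S-expressions (the tags, node shapes, and decoder-hash attribute are invented for illustration): one and the same structure can be walked by a search indexer without ever being run, and interpreted by entirely separate display logic.

    # A document as plain data: [tag, children...]; nested lists stand in for
    # S-expressions. Nothing here executes until some interpreter walks it.
    page = ["article",
            ["title", "No Formats, no Format Wars"],
            ["para", "Ship the decoder ", ["em", "with"], " the content."],
            ["video", {"decoder-hash": "3f2a"}, "payload goes here"]]

    def index_text(node):
        """Collect every string for a search index; no rendering, no evaluation."""
        if isinstance(node, str):
            yield node
        elif isinstance(node, list):
            for child in node[1:]:
                yield from index_text(child)

    def render(node):
        """A second, independent walker: interpret the same data for display."""
        if isinstance(node, str):
            return node
        if isinstance(node, list):
            tag, children = node[0], node[1:]
            body = "".join(render(c) for c in children if not isinstance(c, dict))
            return body.upper() if tag == "em" else body
        return ""

    print(list(index_text(page)))   # the indexer sees the words without running anything
    print(render(page))             # display is a separate interpretation of the same data

Were the page instead an opaque program that merely paints pixels, the indexer really would have to run it and somehow visit every output state, which is precisely the objection above.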
There is some anticipated environment that the I/O can manipulate. Presumably, the environment could be virtualized for security, same as we currently run OS-inside-OS. While this allows a lot of flexibility for content, it also makes the content opaque to further modification, transclusion, and stylization. It would be extra-difficult to add subtitles to a video, adapt a page for a mobile phone, translate a page of text presented as a 2D canvas, or annotate a video stream with meta-content such as time, geographic information, and named content.
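To make the contrast concrete, a minimal illustration along the same invented lines as the sketch above: on a declarative document, adding a subtitle track is an ordinary tree edit, while an opaque rendering program offers nothing to edit.

    # With declarative content, annotation is an ordinary tree edit: find the
    # video node and attach a new child. Node shapes are invented, as before.
    page = ["article",
            ["para", "A short clip:"],
            ["video", {"decoder-hash": "3f2a"}, "payload goes here"]]

    def add_subtitles(node, track):
        """Attach a subtitle track to every video node in a declarative document."""
        if isinstance(node, list):
            if node and node[0] == "video":
                node.append(["subtitles", track])
            for child in node[1:]:
                add_subtitles(child, track)

    add_subtitles(page, "clip.en.srt")
    # If the clip were an opaque program that paints pixels, there would be no
    # video node to find; only that program's author could add the track.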