The 8-bit breadboard computer is certainly limited. But is it even capable enough to be a computer at all? In this video we look at how Turing machines and the lambda calculus define the entire class of "computable problems." And we talk about the relatively minor change needed to make the 8-bit breadboard computer Turing complete.
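The "minor change" in question is a conditional jump. To see why that one instruction matters so much, here is a minimal sketch in C of a toy 8-bit machine in the same spirit. The opcodes (LDA, ADD, OUT, JMP, JZ, HLT) and the memory layout are hypothetical stand-ins, not the breadboard computer's actual instruction set; the point is only that JZ lets the flow of a program depend on computed data.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical opcodes for a toy 8-bit machine (not the real ISA). */
    enum { LDA = 1, ADD = 2, OUT = 3, JMP = 4, JZ = 5, HLT = 6 };

    int main(void) {
        /* Program: print 3, 2, 1, then halt. Each instruction is an
           opcode byte followed by an operand byte. */
        uint8_t mem[256] = {
            LDA, 16,  /*  0: A = mem[16]            */
            OUT, 0,   /*  2: print A                */
            ADD, 17,  /*  4: A += mem[17] (adds -1) */
            JZ,  10,  /*  6: if A == 0, jump to 10  */
            JMP, 2,   /*  8: otherwise loop         */
            HLT, 0,   /* 10: stop                   */
            0, 0, 0, 0,
            3,        /* 16: loop counter           */
            0xFF,     /* 17: -1 in two's complement */
        };
        uint8_t a = 0, pc = 0;
        for (;;) {
            uint8_t op = mem[pc], arg = mem[pc + 1];
            pc += 2;
            switch (op) {
                case LDA: a = mem[arg]; break;
                case ADD: a = (uint8_t)(a + mem[arg]); break;
                case OUT: printf("%u\n", a); break;
                case JMP: pc = arg; break;
                case JZ:  if (a == 0) pc = arg; break; /* the crucial addition */
                case HLT: return 0;
            }
        }
    }

Remove the JZ case and the machine still runs, but every execution takes the same path no matter what the data says; with it, behavior can depend on the result of a computation, which is the heart of the Turing-completeness argument in the video.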

More 8-bit computer:

Support me on Patreon:

——————

Social media:
Website:
Twitter:
Patreon:
Reddit:

Download link

30 Replies to "Making a computer Turing complete"

  1. Technically, if you can jump and you can move memory around, your machine is Turing complete. Take a look at a project called the Movfuscator.
    In short, the Movfuscator was a project to show that any program can be written as nothing but a series of mov instructions (or, in this computer's case, loadA/storeA instructions), with the caveat that there was a single jmp instruction to bring execution back to the beginning.
    And if you aren't convinced at first, consider for a second that this computer already exploits the universality of memory by using its EEPROM as a hex decoder. Memory can emulate any discrete logic element, so if you can move that memory around and have the right program loaded, you can build a Turing machine with it (the sketch after this comment illustrates the move-only trick). So no, you don't technically need a conditional jump instruction, but having one cuts the execution time of general programs down by an exponential amount.
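    As a hedged illustration of the move-only trick described in the comment above, here is a small C sketch. The function and variable names are mine, not the Movfuscator's; the point is that a conditional "select" can be done purely by data movement, using a 0-or-1 value as an index into a table.

        #include <stdio.h>

        /* Branchless select: choose between two values using only data
           movement. cond must be 0 or 1; the decision is made by
           indexing into a table, not by a conditional jump. */
        int mov_select(int cond, int if_true, int if_false) {
            int table[2];
            table[0] = if_false; /* mov table[0], if_false */
            table[1] = if_true;  /* mov table[1], if_true  */
            return table[cond];  /* mov result, table[cond] */
        }

        int main(void) {
            int x = -7;
            int is_neg = (x < 0);                  /* 0 or 1 */
            int abs_x = mov_select(is_neg, -x, x); /* |x| with no branch */
            printf("abs(%d) = %d\n", x, abs_x);
            return 0;
        }

    The Movfuscator pushes this idea to its limit: all control flow is flattened into table lookups and moves, with a single unconditional jump closing the main loop.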

  2. The computer architecture I'm designing has something in common with this, and it's quite similar to how my binary executables will work, though not quite the same. Watching this (9 minutes in so far) helped me organize my thoughts a bit. Reading the paper may help a bit more; where can I get it?
    I'm doing all this from the ground up, not taking inspiration from anything except myself. Some of the design I "invented" turned out, when I described it to a good friend, to be how things are already done, but some of it is somewhat different and some totally different. For example, each byte of storage (10 bits, not 8, though that may still change) has a kind of miniaturized, specialized ALU connected to it, also joined to the bytes before and after it. Yes, lots of overlapping. I almost don't need the standard definition of what you would consider a core to be. My cores are also "virtually" reconfigurable, simulating a wetware computer. I have to phrase it that way, but it's not actually virtual, and it's not interpreted either, since I'm designing this from the ground up. So I don't have a CPU or a GPU in those terms; it's that different. Instead, the software (OS and applications) tells the "cores" how to work together.
    So if you really wanted to think of it this way, I'm running my entire OS on a GPU. But not really; I'm saying that just to give you an idea of the power it has. Yes, my software can modify itself if it's told how to do so. It won't happen accidentally, but if someone wanted to make an artificial intelligence with the purpose of "waking up", they would want to do it on what I'm designing. By the way, I'm going to want and need help making applications for my computer/OS; for example, I don't have the knowledge to make a 3D modeling and rendering environment for a 3D shooter game, or a 3D-printing software suite (really though, let's make all the software needed for 3D printing, start to finish, in one single app, come on with that bleep, people). Also, anything security-related on my setup is going to be extremely difficult because of the nature of my code. All possible "CPU" instructions at the lowest level of hardware are a single byte, and the layout of the bytes follows a tree-view type of logic, so it is "compiled" as you write and behaves more like a script. You could almost say interpreted, but it's not, because there is no overhead and it's done at the hardware level. So the OS and any apps are more open-source than a true GNU/Linux setup. My IDE has a few tools to help you structure the bytes (code) in a tree-view layout instead of one big body of text; its most important role is converting the bytes to very descriptive labels in the tree view, and back to bytes. It's so open (which creates security risk) that, say you're playing a 3D shoot-em-up game and you have the most powerful weapon in the game, but it takes about 5 seconds to recharge before you can fire again, and you want to shoot it like a machine gun: while the game is running, open it in the IDE, find the code responsible for the wait-to-fire, and lower it. As long as you don't change the file size, the changes are applied instantly, without closing the game.

  3. I've never built my own computer, and probably will never find the time to do so. But I almost feel like I've had the experience of doing so, just by watching your videos. Your presentation is clear and compelling, and has given me a much better understanding of what's actually going on "down deep" in a computer's circuitry: things that I've mostly understood at an abstract level but always wanted to understand all the way down, in detail, to the physical level. Thank you for making these videos, and I definitely look forward to more.

  4. The MOV instruction is Turing-complete.
    Because it's like "the thing that moves over the tape": for example, MOV(source, destination).
    So while working at a conveyor belt in a factory, I'd really be a bit of a computer… 😉

  5. I got into hardware designs for OISCs (One-Instruction Set Computers) recently. It's fun and frustrating in equal measure ; )

    I did manage to build my own Turing-complete OISC computer based on a single SUBLEQ instruction, which is a well-known OISC design. Although I'd argue that once you're down to a single instruction, it is no longer an instruction at all – so a single-instruction computer is actually a zero-instruction computer. In the same way that a rock always behaves as a rock, we wouldn't say it is following some "be a rock" instruction – it just is.

    Being a SUBLEQ machine, all mine can ever do for conditional flow control is subtract the value at one address from the value at another, and then jump to a third specified address if the result is zero or less. Trying to write code for it is pretty mind-bending, but it IS definitely Turing-complete. I've written quite a few routines for SUBLEQ so far, from addition and subtraction to modular multiplication and division… and my final program was to list the primes up to 255, which was shockingly complex.

    SUBLEQ is a sort of "reductio ad absurdum" of computation (a minimal interpreter sketch follows at the end of this comment).

    I'm currently working on a design for a register-based OISC called TRACiE (Touch Register And Continue if Equal), which is an extension of SUBLEQ. TRACiE can "touch" registers up (+1) or down (-1) based on a direction bit (like a combined INC/DEC instruction), but touching down a register that holds zero causes an "exception", which jumps to a supplied address parameter. Only touching down can do this; touching up never causes an exception. Unlike the SUBLEQ machine, TRACiE doesn't need RAM – only registers – so she works on streams. TRACiEs can be daisy-chained.

    To establish IO for TRACiE, I permitted myself two virtual registers, READ (register 14) and WRITE (register 15), which hold no value, and a special register DATA (register 0), which behaves normally but is an IO target. When you touch READ, it copies a byte from standard input to DATA, and when you touch WRITE, it writes the DATA register to standard output. In this way, my new OISC is able to process streams and can join a chain of processors. It's all pretty fascinating stuff when you get into minimalist processing systems.

    I'm in the middle of getting rid of READ and WRITE and just having a single STEPIO register. Touched up, it writes to the output stream; touched down, it reads from the input stream (raising an exception on EOF). This is more in keeping with TRACiE's nature, where only touching a register downwards can alter flow control.

    Making hardware OISCs is fun : )

    Programming an OISC is truly migraine-inducing stuff, though. Of course, no matter how difficult a computer is to program, we can produce libraries and compilers so that we can eventually code in a higher-level language, so there's no reason why an OISC couldn't be programmed in a C-like paradigm. I haven't yet got as far as creating a proper cross-compiler, but I can now at least appreciate how it would be possible, given an OISC with enough register/memory space.

    I'm trying to avoid adding a stack to TRACiE, as I'm enjoying the challenge of a pure register-driven OISC, but a PUSHPOP virtual register would be the obvious choice for allowing function isolation, permitting more complex code to be written without getting bogged down in register tracking and management. It just feels a little too much like "instructions via the back door".

    I guess my next challenge is to get into compiler design ; )
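    For anyone who wants to experiment with the SUBLEQ machine described above, here is a minimal interpreter sketch in C. The demo program and memory layout are my own hypothetical example, not the commenter's hardware; the semantics are the standard SUBLEQ ones: subtract mem[a] from mem[b] and branch to c if the result is less than or equal to zero, with a negative branch target meaning halt.

        #include <stdio.h>

        /* One instruction: subleq a, b, c
           mem[b] -= mem[a]; if (mem[b] <= 0) goto c; else next triple.
           A negative pc halts the machine. */
        int main(void) {
            int mem[16] = {
                6, 7, 3,  /* 0: mem[7] -= mem[6]; result 7 > 0, fall through */
                8, 8, -1, /* 3: mem[8] -= mem[8] == 0, so branch to -1: halt */
                5,        /* 6: subtrahend */
                12,       /* 7: minuend, becomes the result */
                0,        /* 8: scratch zero for the unconditional halt */
            };
            int pc = 0;
            while (pc >= 0) {
                int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
                mem[b] -= mem[a];
                pc = (mem[b] <= 0) ? c : pc + 3;
            }
            printf("12 - 5 = %d\n", mem[7]); /* prints 7 */
            return 0;
        }

    Everything else – copying, addition, loops, the prime-listing routine mentioned above – has to be built from chains of these triples, which is why SUBLEQ code balloons so quickly.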

  6. I am quite sure, from when I was studying computer science, that we were told computers don't strictly need instructions for division, multiplication, and subtraction – it can all be done with adding. So at the least, I'd consider yours a complete computer.
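    The reduction the commenter describes is easy to show for subtraction: in two's complement, -b == ~b + 1, so a - b == a + ~b + 1, and an adder circuit can be reused unchanged. A small sketch (my own example, not from the video):

        #include <stdio.h>
        #include <stdint.h>

        /* Subtraction using only addition and bitwise NOT. */
        uint8_t sub_via_add(uint8_t a, uint8_t b) {
            return (uint8_t)(a + (uint8_t)~b + 1); /* a + (-b) */
        }

        int main(void) {
            printf("%u\n", sub_via_add(12, 5)); /* 7 */
            printf("%u\n", sub_via_add(5, 12)); /* 249, i.e. -7 mod 256 */
            return 0;
        }

    Multiplication and division can likewise be reduced to repeated addition and shifting, although, for the record, modern CPUs generally do ship dedicated multiply and divide instructions; it's the minimal machine that gets by on addition alone.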

  7. Since our brains don't have infinite memory either, are we also technically not Turing-complete computing machines?

    However, a human with an infinite supply of pens and an infinite supply of paper would be, if storage is the only real obstacle. The human just acts on the pointer and generates their own set of instructions based on received input.
    Thanks for this; it really helped my understanding of Turing machines and what it actually means to be a Turing-complete machine.

  8. So we could build an entire CPU by building multiple boards that each represent one CPU core??
    Why not build an entire PC that can play the Crysis trilogy? We could add 256 GB of RAM, so we could run the entire OS in a RAM disk.

Comments are closed.