/[gxemul]/trunk/src/memory_rw.c
This is a repository of my old source code, which isn't updated any more. Go to git.rot13.org for current projects!

Annotation of /trunk/src/memory_rw.c



Revision 28
Mon Oct 8 16:20:26 2007 UTC (16 years, 7 months ago) by dpavlin
File MIME type: text/plain
File size: 15142 byte(s)
++ trunk/HISTORY	(local)
$Id: HISTORY,v 1.1298 2006/07/22 11:27:46 debug Exp $
20060626	Continuing on SPARC emulation (beginning on the 'save'
		instruction, register windows, etc).
20060629	Planning statistics gathering (new -s command line option),
		and renaming speed_tricks to allow_instruction_combinations.
20060630	Some minor manual page updates.
		Various cleanups.
		Implementing the -s command line option.
20060701	FINALLY found the bug which prevented Linux and Ultrix from
		running without the ugly hack in the R2000/R3000 cache
		isolation code; it was the phystranslation hint array which
		was buggy. Removing the phystranslation hint code completely,
		for now.
20060702	Minor dyntrans cleanups; invalidation of physpages now only
		invalidates those parts of a page that have actually been
		translated. (32 parts per page.)
		Some MIPS non-R3000 speed fixes.
		Experimenting with MIPS instruction combination for some
		addiu+bne+sw loops, and sw+sw+sw.
		Adding support (again) for larger-than-4KB pages in MIPS tlbw*.
		Continuing on SPARC emulation: adding load/store instructions.
20060704	Fixing a virtual vs physical page shift bug in the new tlbw*
		implementation. Problem noticed by Jakub Jermar. (Many thanks.)
		Moving rfe and eret to cpu_mips_instr.c, since that is the
		only place that uses them nowadays.
20060705	Removing the BSD license from the "testmachine" include files,
		placing them in the public domain instead; this enables the
		testmachine stuff to be used from projects which are
		incompatible with the BSD license for some reason.
20060707	Adding instruction combinations for the R2000/R3000 L1
		I-cache invalidation code used by NetBSD/pmax 3.0, lui+addiu,
		various branches followed by addiu or nop, and jr ra followed
		by addiu. The time it takes to perform a full NetBSD/pmax R3000
		install on the laptop has dropped from 573 seconds to 539. :-)
20060708	Adding a framebuffer controller device (dev_fbctrl), which so
		far can be used to change the fb resolution during runtime, but
		in the future will also be useful for accelerated block fill/
		copy, and possibly also simplified character output.
		Adding an instruction combination for NetBSD/pmax' strlen.
20060709	Minor fixes: reading raw files in src/file.c wasn't memblock
		aligned, removing buggy multi_sw MIPS instruction combination,
		etc.
20060711	Adding a machine_qemu.c, which contains a "qemu_mips" machine.
		(It mimics QEMU's MIPS machine mode, so that a test kernel
		made for QEMU_MIPS also can run in GXemul... at least to some
		extent.)  Adding a short section about how to run this mode to
		doc/guestoses.html.
20060714	Misc. minor code cleanups.
20060715	Applying a patch which adds getchar() to promemul/yamon.c
		(from Oleksandr Tymoshenko).
		Adding yamon.h from NetBSD, and rewriting yamon.c to use it
		(instead of ugly hardcoded numbers) + some cleanup.
20060716	Found and fixed the bug which broke single-stepping of 64-bit
		programs between 0.4.0 and 0.4.0.1 (caused by too quick
		refactoring and no testing). Hopefully this fix will not
		break too many other things.
20060718	Continuing on the 8253 PIT; it now works with Linux/QEMU_MIPS.
		Re-adding the sw+sw+sw instr comb (the problem was that I had
		ignored endian issues); however, it doesn't seem to give any
		big performance gain.
20060720	Adding a dummy Transputer mode (T414, T800 etc) skeleton (only
		the 'j' and 'ldc' instructions are implemented so far). :-}
20060721	Adding gtreg.h from NetBSD, updating dev_gt.c to use it, plus
		misc. other updates to get Linux 2.6 for evbmips/malta working
		(thanks to Alec Voropay for the details).
		FINALLY found and fixed the bug which made tlbw* for non-R3000
		buggy; it was a reference count problem in the dyntrans core.
20060722	Testing stuff; things seem stable enough for a new release.

==============  RELEASE 0.4.1  ==============


/*
 *  Copyright (C) 2003-2006  Anders Gavare.  All rights reserved.
 *
 *  Redistribution and use in source and binary forms, with or without
 *  modification, are permitted provided that the following conditions are met:
 *
 *  1. Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *  2. Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *  3. The name of the author may not be used to endorse or promote products
 *     derived from this software without specific prior written permission.
 *
 *  THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 *  ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 *  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 *  ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 *  FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 *  DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 *  OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 *  HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 *  LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 *  OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 *  SUCH DAMAGE.
 *
 *
 *  $Id: memory_rw.c,v 1.93 2006/07/14 16:33:27 debug Exp $
 *
 *  Generic memory_rw(), with special hacks for specific CPU families.
 *
 *  Example for inclusion from memory_mips.c:
 *
 *      MEMORY_RW should be mips_memory_rw
 *      MEM_MIPS should be defined
 */
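/*
 *  Illustrative sketch (an assumption, not part of the original file): a
 *  per-CPU wrapper such as memory_mips.c is expected to set up the macros
 *  along these lines before textually including this file:
 *
 *      #define MEMORY_RW  mips_memory_rw
 *      #define MEM_MIPS
 *      #include "memory_rw.c"
 *      #undef MEM_MIPS
 *      #undef MEMORY_RW
 */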


/*
 *  memory_rw():
 *
 *  Read or write data from/to memory.
 *
 *      cpu         the cpu doing the read/write
 *      mem         the memory object to use
 *      vaddr       the virtual address
 *      data        a pointer to the data to be written to memory, or
 *                  a placeholder for data when reading from memory
 *      len         the length of the 'data' buffer
 *      writeflag   set to MEM_READ or MEM_WRITE
 *      misc_flags  CACHE_{NONE,DATA,INSTRUCTION} | other flags
 *
 *  If the address indicates access to a memory mapped device, that device's
 *  read/write access function is called.
 *
 *  This function should not be called with cpu == NULL.
 *
 *  Returns one of the following:
 *      MEMORY_ACCESS_FAILED
 *      MEMORY_ACCESS_OK
 *
 *  (MEMORY_ACCESS_FAILED is 0.)
 */
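/*
 *  Illustrative call (an assumption, not from the original source), for a
 *  MIPS build where MEMORY_RW expands to mips_memory_rw; the buffer and
 *  flag choices are only an example:
 *
 *      unsigned char buf[4];
 *      if (mips_memory_rw(cpu, cpu->mem, vaddr, buf, sizeof(buf),
 *          MEM_READ, CACHE_DATA) == MEMORY_ACCESS_FAILED)
 *              fatal("read failed at vaddr 0x%llx\n", (long long)vaddr);
 */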
int MEMORY_RW(struct cpu *cpu, struct memory *mem, uint64_t vaddr,
    unsigned char *data, size_t len, int writeflag, int misc_flags)
{
#ifdef MEM_ALPHA
    const int offset_mask = 0x1fff;
#else
    const int offset_mask = 0xfff;
#endif

#ifndef MEM_USERLAND
    int ok = 1;
#endif
    uint64_t paddr;
    int cache, no_exceptions, offset;
    unsigned char *memblock;
    int dyntrans_device_danger = 0;

    no_exceptions = misc_flags & NO_EXCEPTIONS;
    cache = misc_flags & CACHE_FLAGS_MASK;

#ifdef MEM_X86
    /*  Real-mode wrap-around:  */
    if (REAL_MODE && !(misc_flags & PHYSICAL)) {
        if ((vaddr & 0xffff) + len > 0x10000) {
            /*  Do one byte at a time:  */
            int res = 0;
            size_t i;
            for (i=0; i<len; i++)
                res = MEMORY_RW(cpu, mem, vaddr+i, &data[i], 1,
                    writeflag, misc_flags);
            return res;
        }
    }

    /*  Crossing a page boundary? Then do one byte at a time:  */
    if ((vaddr & 0xfff) + len > 0x1000 && !(misc_flags & PHYSICAL)
        && cpu->cd.x86.cr[0] & X86_CR0_PG) {
        /*  For WRITES: Read ALL BYTES FIRST and write them back!!!
            Then do a write of all the new bytes. This is to make sure
            that both pages around the boundary are writable so we don't
            do a partial write.  */
        int res = 0;
        size_t i;
        if (writeflag == MEM_WRITE) {
            unsigned char tmp;
            for (i=0; i<len; i++) {
                res = MEMORY_RW(cpu, mem, vaddr+i, &tmp, 1,
                    MEM_READ, misc_flags);
                if (!res)
                    return 0;
                res = MEMORY_RW(cpu, mem, vaddr+i, &tmp, 1,
                    MEM_WRITE, misc_flags);
                if (!res)
                    return 0;
            }
            for (i=0; i<len; i++) {
                res = MEMORY_RW(cpu, mem, vaddr+i, &data[i], 1,
                    MEM_WRITE, misc_flags);
                if (!res)
                    return 0;
            }
        } else {
            for (i=0; i<len; i++) {
                /*  Do one byte at a time:  */
                res = MEMORY_RW(cpu, mem, vaddr+i, &data[i], 1,
                    writeflag, misc_flags);
                if (!res) {
                    if (cache == CACHE_INSTRUCTION) {
                        fatal("FAILED instruction "
                            "fetch across page boundar"
                            "y: todo. vaddr=0x%08x\n",
                            (int)vaddr);
                        cpu->running = 0;
                    }
                    return 0;
                }
            }
        }
        return res;
    }
#endif  /*  X86  */


#ifdef MEM_USERLAND
#ifdef MEM_ALPHA
    paddr = vaddr;
#else
    paddr = vaddr & 0x7fffffff;
#endif
#else  /*  !MEM_USERLAND  */
    if (misc_flags & PHYSICAL || cpu->translate_v2p == NULL) {
        paddr = vaddr;
    } else {
        ok = cpu->translate_v2p(cpu, vaddr, &paddr,
            (writeflag? FLAG_WRITEFLAG : 0) +
            (no_exceptions? FLAG_NOEXCEPTIONS : 0)
#ifdef MEM_X86
            + (misc_flags & NO_SEGMENTATION)
#endif
#ifdef MEM_ARM
            + (misc_flags & MEMORY_USER_ACCESS)
#endif
            + (cache==CACHE_INSTRUCTION? FLAG_INSTR : 0));
        /*  If the translation caused an exception, or was invalid in
            some way, we simply return without doing the memory
            access:  */
        if (!ok)
            return MEMORY_ACCESS_FAILED;
    }


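    /*
     *  Illustrative sketch (an assumption, not from the original source):
     *  the general shape of a translate_v2p callback as invoked above.
     *  Only the call signature and the "0 means failure" convention are
     *  taken from this file; the direct-mapped translation is made up.
     *
     *      static int example_translate_v2p(struct cpu *cpu, uint64_t vaddr,
     *          uint64_t *return_paddr, int flags)
     *      {
     *          *return_paddr = vaddr & 0x1fffffffULL;
     *          return 1;
     *      }
     */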
#ifdef MEM_X86
    /*  DOS debugging :-)  */
    if (!quiet_mode && !(misc_flags & PHYSICAL)) {
        if (paddr >= 0x400 && paddr <= 0x4ff)
            debug("{ PC BIOS DATA AREA: %s 0x%x }\n", writeflag ==
                MEM_WRITE? "writing to" : "reading from",
                (int)paddr);
#if 0
        if (paddr >= 0xf0000 && paddr <= 0xfffff)
            debug("{ BIOS ACCESS: %s 0x%x }\n",
                writeflag == MEM_WRITE? "writing to" :
                "reading from", (int)paddr);
#endif
    }
#endif
#endif  /*  !MEM_USERLAND  */


#ifndef MEM_USERLAND
    /*
     *  Memory mapped device?
     *
     *  TODO: if paddr < base, but len enough, then the device should
     *  still be written to!
     */
    if (paddr >= mem->mmap_dev_minaddr && paddr < mem->mmap_dev_maxaddr) {
        uint64_t orig_paddr = paddr;
        int i, start, end, res;

        /*
         *  Really really slow, but unfortunately necessary. This is
         *  to avoid the following scenario:
         *
         *      a) offsets 0x000..0x123 are normal memory
         *      b) offsets 0x124..0x777 are a device
         *
         *      1) a read is done from offset 0x100. the page is
         *         added to the dyntrans system as a "RAM" page
         *      2) a dyntranslated read is done from offset 0x200,
         *         which should access the device, but since the
         *         entire page is added, it will access non-existent
         *         RAM instead, without warning.
         *
         *  Setting dyntrans_device_danger = 1 on accesses which are
         *  on _any_ offset on pages that are device mapped avoids
         *  this problem, but it is probably not very fast.
         *
         *  TODO: Convert this into a quick (multi-level, 64-bit)
         *  address space lookup, to find dangerous pages.
         */
#if 1
        for (i=0; i<mem->n_mmapped_devices; i++)
            if (paddr >= (mem->dev_baseaddr[i] & ~offset_mask) &&
                paddr <= ((mem->dev_endaddr[i]-1) | offset_mask)) {
                dyntrans_device_danger = 1;
                break;
            }
#endif

        start = 0; end = mem->n_mmapped_devices - 1;
        i = mem->last_accessed_device;

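        /*
         *  Note: the do-while loop below is effectively a binary search
         *  over the device array (assumed to be kept sorted by base
         *  address), seeded with the most recently accessed device.
         */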
        /*  Scan through all devices:  */
        do {
            if (paddr >= mem->dev_baseaddr[i] &&
                paddr < mem->dev_endaddr[i]) {
                /*  Found a device, let's access it:  */
                mem->last_accessed_device = i;

                paddr -= mem->dev_baseaddr[i];
                if (paddr + len > mem->dev_length[i])
                    len = mem->dev_length[i] - paddr;

                if (cpu->update_translation_table != NULL &&
                    !(ok & MEMORY_NOT_FULL_PAGE) &&
                    mem->dev_flags[i] & DM_DYNTRANS_OK) {
                    int wf = writeflag == MEM_WRITE? 1 : 0;
                    unsigned char *host_addr;

                    if (!(mem->dev_flags[i] & DM_DYNTRANS_WRITE_OK))
                        wf = 0;

                    if (writeflag && wf) {
                        if (paddr < mem->dev_dyntrans_write_low[i])
                            mem->dev_dyntrans_write_low[i] =
                                paddr & ~offset_mask;
                        if (paddr >= mem->dev_dyntrans_write_high[i])
                            mem->dev_dyntrans_write_high[i] =
                                paddr | offset_mask;
                    }

                    if (mem->dev_flags[i] & DM_EMULATED_RAM) {
                        /*  MEM_WRITE to force the page to be
                            allocated, if it wasn't already  */
                        uint64_t *pp = (uint64_t *)
                            mem->dev_dyntrans_data[i];
                        uint64_t p = orig_paddr - *pp;
                        host_addr = memory_paddr_to_hostaddr(
                            mem, p & ~offset_mask, MEM_WRITE);
                    } else {
                        host_addr = mem->dev_dyntrans_data[i] +
                            (paddr & ~offset_mask);
                    }

                    cpu->update_translation_table(cpu,
                        vaddr & ~offset_mask, host_addr,
                        wf, orig_paddr & ~offset_mask);
                }

                res = 0;
                if (!no_exceptions || (mem->dev_flags[i] &
                    DM_READS_HAVE_NO_SIDE_EFFECTS))
                    res = mem->dev_f[i](cpu, mem, paddr,
                        data, len, writeflag,
                        mem->dev_extra[i]);

                if (res == 0)
                    res = -1;

#ifndef MEM_X86
                /*
                 *  If accessing the memory mapped device
                 *  failed, then return with a DBE exception.
                 */
                if (res <= 0 && !no_exceptions) {
                    debug("%s device '%s' addr %08lx "
                        "failed\n", writeflag?
                        "writing to" : "reading from",
                        mem->dev_name[i], (long)paddr);
#ifdef MEM_MIPS
                    mips_cpu_exception(cpu, EXCEPTION_DBE,
                        0, vaddr, 0, 0, 0, 0);
#endif
                    return MEMORY_ACCESS_FAILED;
                }
#endif
                goto do_return_ok;
            }

            if (paddr < mem->dev_baseaddr[i])
                end = i - 1;
            if (paddr >= mem->dev_endaddr[i])
                start = i + 1;
            i = (start + end) >> 1;
        } while (start <= end);
    }


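    /*
     *  Illustrative sketch (an assumption, not from the original source):
     *  a device access function with the argument order used in the
     *  mem->dev_f[i] call above; the device itself is made up, and the
     *  "return 1 on success" convention mirrors the res <= 0 check above.
     *
     *      int example_dev_access(struct cpu *cpu, struct memory *mem,
     *          uint64_t relative_addr, unsigned char *data, size_t len,
     *          int writeflag, void *extra)
     *      {
     *          static uint8_t reg[4];
     *          if (writeflag == MEM_WRITE)
     *              memcpy(reg, data, len < sizeof(reg)? len : sizeof(reg));
     *          else
     *              memcpy(data, reg, len < sizeof(reg)? len : sizeof(reg));
     *          return 1;
     *      }
     */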
#ifdef MEM_MIPS
    /*
     *  Data and instruction cache emulation:
     */

    switch (cpu->cd.mips.cpu_type.mmu_model) {
    case MMU3K:
        /*  if not an uncached address (TODO: generalize this)  */
        if (!(misc_flags & PHYSICAL) && cache != CACHE_NONE &&
            !((vaddr & 0xffffffffULL) >= 0xa0000000ULL &&
              (vaddr & 0xffffffffULL) <= 0xbfffffffULL)) {
            if (memory_cache_R3000(cpu, cache, paddr,
                writeflag, len, data))
                goto do_return_ok;
        }
        break;
    default:
        /*  R4000 etc  */
        /*  TODO  */
        ;
    }
#endif  /*  MEM_MIPS  */


    /*  Outside of physical RAM?  */
    if (paddr >= mem->physical_max) {
#ifdef MEM_MIPS
        if ((paddr & 0xffffc00000ULL) == 0x1fc00000) {
            /*  Ok, this is PROM stuff  */
        } else if ((paddr & 0xfffff00000ULL) == 0x1ff00000) {
            /*  Sprite reads from this area of memory...  */
            /*  TODO: is this still correct?  */
            if (writeflag == MEM_READ)
                memset(data, 0, len);
            goto do_return_ok;
        } else
#endif  /*  MIPS  */
        {
            if (paddr >= mem->physical_max) {
                uint64_t offset, old_pc = cpu->pc;
                char *symbol;

                /*  This allows for example OS kernels to probe
                    memory a few KBs past the end of memory,
                    without giving too many warnings.  */
                if (!quiet_mode && !no_exceptions && paddr >=
                    mem->physical_max + 0x40000) {
                    fatal("[ memory_rw(): writeflag=%i ",
                        writeflag);
                    if (writeflag) {
                        unsigned int i;
                        debug("data={");
                        if (len > 16) {
                            int start2 = len-16;
                            for (i=0; i<16; i++)
                                debug("%s%02x",
                                    i?",":"", data[i]);
                            debug(" .. ");
                            if (start2 < 16)
                                start2 = 16;
                            for (i=start2; i<len; i++)
                                debug("%s%02x",
                                    i?",":"", data[i]);
                        } else
                            for (i=0; i<len; i++)
                                debug("%s%02x",
                                    i?",":"", data[i]);
                        debug("}");
                    }

                    fatal(" paddr=0x%llx >= physical_max"
                        "; pc=", (long long)paddr);
                    if (cpu->is_32bit)
                        fatal("0x%08x", (int)old_pc);
                    else
                        fatal("0x%016llx",
                            (long long)old_pc);
                    symbol = get_symbol_name(
                        &cpu->machine->symbol_context,
                        old_pc, &offset);
                    fatal(" <%s> ]\n",
                        symbol? symbol : " no symbol ");
                }
            }

            if (writeflag == MEM_READ) {
#ifdef MEM_X86
                /*  Reading non-existent memory on x86:  */
                memset(data, 0xff, len);
#else
                /*  Return all zeroes? (Or 0xff? TODO)  */
                memset(data, 0, len);
#endif

#ifdef MEM_MIPS
                /*
                 *  For real data/instruction accesses, cause
                 *  an exception on an illegal read:
                 */
                if (cache != CACHE_NONE && cpu->machine->
                    dbe_on_nonexistant_memaccess &&
                    !no_exceptions) {
                    if (paddr >= mem->physical_max &&
                        paddr < mem->physical_max+1048576)
                        mips_cpu_exception(cpu,
                            EXCEPTION_DBE, 0, vaddr, 0,
                            0, 0, 0);
                }
#endif  /*  MEM_MIPS  */
            }

            /*  Hm? Shouldn't there be a DBE exception for
                invalid writes as well? TODO  */

            goto do_return_ok;
        }
    }

#endif  /*  ifndef MEM_USERLAND  */


    /*
     *  Uncached access:
     *
     *  1)  Translate the physical address to a host address.
     *
     *  2)  Insert this virtual->physical->host translation into the
     *      fast translation arrays (using update_translation_table()).
     *
     *  3)  If this was a Write, then invalidate any code translations
     *      in that page.
     */
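    /*
     *  Note that memory_paddr_to_hostaddr() may return NULL (e.g. for a
     *  page that has never been written to); reads from such a page are
     *  simply zero-filled below.
     */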
    memblock = memory_paddr_to_hostaddr(mem, paddr & ~offset_mask,
        writeflag);
    if (memblock == NULL) {
        if (writeflag == MEM_READ)
            memset(data, 0, len);
        goto do_return_ok;
    }

    offset = paddr & offset_mask;

    if (cpu->update_translation_table != NULL && !dyntrans_device_danger
#ifdef MEM_MIPS
        /*  Ugly hack for R2000/R3000 caches:  */
        && (cpu->cd.mips.cpu_type.mmu_model != MMU3K ||
        !(cpu->cd.mips.coproc[0]->reg[COP0_STATUS] & MIPS1_ISOL_CACHES))
#endif
#ifndef MEM_MIPS
        /*  && !(misc_flags & MEMORY_USER_ACCESS)  */
#ifndef MEM_USERLAND
        && !(ok & MEMORY_NOT_FULL_PAGE)
#endif
#endif
        && !no_exceptions)
        cpu->update_translation_table(cpu, vaddr & ~offset_mask,
            memblock, (misc_flags & MEMORY_USER_ACCESS) |
#if !defined(MEM_MIPS) && !defined(MEM_USERLAND)
            (cache == CACHE_INSTRUCTION?
            (writeflag == MEM_WRITE? 1 : 0) : ok - 1),
#else
            (writeflag == MEM_WRITE? 1 : 0),
#endif
            paddr & ~offset_mask);

    /*  Invalidate code translations for the page we are writing to.  */
    if (writeflag == MEM_WRITE && cpu->invalidate_code_translation != NULL)
        cpu->invalidate_code_translation(cpu, paddr, INVALIDATE_PADDR);

    if ((paddr&((1<<BITS_PER_MEMBLOCK)-1)) + len > (1<<BITS_PER_MEMBLOCK)) {
        printf("Write over memblock boundary?\n");
        exit(1);
    }

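    /*
     *  Note: the word-sized fast paths below copy the same bytes that
     *  memcpy() would, just as one aligned 32-bit host load/store, so they
     *  are neutral with respect to the emulated machine's byte order.
     */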
    if (writeflag == MEM_WRITE) {
        /*  Ugly optimization, but it works:  */
        if (len == sizeof(uint32_t) && (offset & 3)==0
            && ((size_t)data&3)==0)
            *(uint32_t *)(memblock + offset) = *(uint32_t *)data;
        else if (len == sizeof(uint8_t))
            *(uint8_t *)(memblock + offset) = *(uint8_t *)data;
        else
            memcpy(memblock + offset, data, len);
    } else {
        /*  Ugly optimization, but it works:  */
        if (len == sizeof(uint32_t) && (offset & 3)==0
            && ((size_t)data&3)==0)
            *(uint32_t *)data = *(uint32_t *)(memblock + offset);
        else if (len == sizeof(uint8_t))
            *(uint8_t *)data = *(uint8_t *)(memblock + offset);
        else
            memcpy(data, memblock + offset, len);
    }


do_return_ok:
    return MEMORY_ACCESS_OK;
}
