First of all, I should say that I'm a total newbie at this, so I won't be very
useful... I might even be an obstacle. Take my advice with a grain of salt.
As far as I know, when several processes running in parallel need to
communicate, the data sent in the messages has to be packaged. This is done
with a doPacking() function, which packs the data into a communications
buffer. The doPacking() function is generated from the message definition
(.msg files); in your case, from IPControlInfo.msg. Take a look at the
autogenerated _m file: it contains the doPacking() function that is throwing
the error. I believe that doPacking() needs one of the cObject class methods
(cObject::netPack(cCommBuffer *buffer)), and IPAddress is not a cObject
subclass. IPControlInfo.msg declares "class noncobject IPAddress;" (line 22),
so you probably can't pack it, and therefore can't send it.
I am not sure about this, but it may be a hint. Actually, I realize I've only
told you the "why" of the error without giving you any solution... I'm sorry,
but as I said, I'm a newbie and don't know much more about this.
PS: If you (or anybody else reading this email) manages to run an INET
simulation in parallel, please tell me (or us, if you want to send it to the
list).
Adios ;)
> Hello list
>
> I am interested in running simulations of very big networks, and I am
> running out of memory so I am trying to run them in parallel. I have
> already run some of the parallel examples in plain omnetpp, but when it
> comes to using the INET framework I get some weird error messages.
>
> Right now I am just trying to put a client machine (a StandardHost in the
> INET model) in one partition and a server (another StandardHost) in a
> second partition. They just generate some traffic, with the TCPEchoApp,
> and the model is just these two hosts and a link between them that
> crosses between partitions. In sequential mode everything is ok. In
> parallel, I can see the server listening to incoming TCP connections and
> the client opening a new TCP session, but when the simulator tries to
> send the first SYN packet through the link to the other partition of the
> parallel simulation, it fails with the following error:
>
> Error in module (PPP) bulkTransfer.client1.ppp[0].ppp: Parsim error: no
> doPacking() function for type IPAddress or its base class (check .msg
> and _m.cc/h files!)
>
> and then:
>
> Error occurred in procId=0: Parsim error: no doPacking() function for
> type IPAddress or its base class (check .msg and _m.cc/h files!)
>
> I have not seen any messages on the list about parallel INET
> models. Has anyone tried to run a simulation like that? Does anyone
> know if the INET framework is parallelizable at all? Maybe I will have
> to get my hands into the code.
>
> If anyone wants to reproduce my results, I attach the simulation files
> to this message. It is based on the BulkTransfer example of the INET
> framework, but even more simplified.
>
> Thank you in advance.
> --
> There are 10 types of people:
> those who know binary, and those who don't.
>
> Javier Celaya, Linux User #367634 /"\
> jcelaya AT gmail DOT com \ / ASCII Ribbon Campaign
> http://jcelaya.blogspot.com X against HTML mail
> JID: lothan at zgzjabber dot ath.cx / \
> jcelaya at gmail dot com
>
>
> //
> // Copyright (C) 2000 Institut fuer Telematik, Universitaet Karlsruhe
> //
> // This program is free software; you can redistribute it and/or
> // modify it under the terms of the GNU General Public License
> // as published by the Free Software Foundation; either version 2
> // of the License, or (at your option) any later version.
> //
> // This program is distributed in the hope that it will be useful,
> // but WITHOUT ANY WARRANTY; without even the implied warranty of
> // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> // GNU General Public License for more details.
> //
> // You should have received a copy of the GNU General Public License
> // along with this program; if not, write to the Free Software
> // Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
> 02111-1307, USA.
> //
>
>
> import "StandardHost";
>
>
> module BulkTransfer
>     submodules:
>         server: StandardHost;
>             parameters:
>                 routingFile = "servidor.irt";
>             //display: "p=131,247;i=device/pc2";
>         client1: StandardHost;
>             parameters:
>                 routingFile = "cliente.irt";
>             //display: "p=131,67;i=device/pc3";
>     connections:
>         client1.out++ --> delay 200ms datarate 1000000 --> server.in++;
>         client1.in++ <-- delay 200ms datarate 1000000 <-- server.out++;
> endmodule
>
> network bulkTransfer : BulkTransfer
> endnetwork
>
>
> ifconfig:
>
> # PPP con el router
> name: ppp0 inet_addr: 192.168.0.1 MTU: 1486 Metric: 1 POINTTOPOINT
>
> ifconfigend.
>
> route:
>
> # Destination Gateway Netmask Flags Metric Interface
> 192.168.0.2 * 255.255.255.255 H 0 ppp0
> default: * 0.0.0.0 G 0 ppp0
>
> routeend.
>
> [General]
> preload-ned-files = *.ned
>
> network = bulkTransfer
>
> total-stack-kb=7535
>
> parallel-simulation=true
> parsim-communications-class="cMPICommunications"
> parsim-synchronization-class= "cNullMessageProtocol"
>
>
> [Cmdenv]
> express-mode = no
>
>
> [Tkenv]
> plugin-path=../../../Etc/plugins
> default-run=1
>
>
> [Parameters]
>
> # udp app (off)
> **.numUdpApps=0
> **.udpAppType="UDPBasicApp"
>
> # tcp apps
> **.numTcpApps=1
> **.client*.tcpAppType="TCPSessionApp"
> **.client*.tcpApp[0].active=true
> **.client*.tcpApp[0].address="192.168.0.1"
> **.client*.tcpApp[0].port=-1
> **.client*.tcpApp[0].connectAddress="192.168.0.2"
> **.client*.tcpApp[0].connectPort=1000
> **.client*.tcpApp[0].tOpen=1.0
> **.client*.tcpApp[0].tSend=1.1
> **.client*.tcpApp[0].sendBytes=1000000 # 1 MB
> **.client*.tcpApp[0].sendScript=""
> **.client*.tcpApp[0].tClose=0
>
> #**.server*.tcpAppType="TCPSinkApp"
> **.server*.tcpAppType="TCPEchoApp"
> **.server*.tcpApp[0].address="192.168.0.2"
> **.server*.tcpApp[0].port=1000
> **.server*.tcpApp[0].echoFactor=2.0
> **.server*.tcpApp[0].echoDelay=0
>
> # ping app (off)
> **.pingApp.destAddr=""
> **.pingApp.srcAddr=""
> **.pingApp.packetSize=56
> **.pingApp.interval=1
> **.pingApp.hopLimit=32
> **.pingApp.count=0
> **.pingApp.startTime=1
> **.pingApp.stopTime=0
> **.pingApp.printPing=true
>
> # tcp settings
> **.tcp.mss = 1024
> **.tcp.advertisedWindow = 14336 # 14*mss
> **.tcp.sendQueueClass="TCPVirtualDataSendQueue"
> **.tcp.receiveQueueClass="TCPVirtualDataRcvQueue"
> **.tcp.tcpAlgorithmClass="TCPReno"
> **.tcp.recordStats=true
>
> # ip settings
> **.ip.procDelay=10us
> **.routingFile=""
> **.IPForward=false # Router's is hardwired "true"
>
> # ARP configuration
> **.arp.retryTimeout = 1
> **.arp.retryCount = 3
> **.arp.cacheTimeout = 100
> **.networkLayer.proxyARP = true # Host's is hardwired "false"
>
> # NIC configuration
> **.ppp[*].queueType = "DropTailQueue" # in routers
> **.ppp[*].queue.frameCapacity = 10 # in routers
>
> # nam trace
> **.nam.logfile = "trace.nam"
> **.nam.prolog = ""
> **.namid = -1 # auto
>
> [Partitioning]
> bulkTransfer.client1**.partition-id=0
> bulkTransfer.server**.partition-id=1
>
> ifconfig:
>
> # PPP con el router
> name: ppp0 inet_addr: 192.168.0.2 MTU: 1486 Metric: 1 POINTTOPOINT
>
> ifconfigend.
>
> route:
>
> # Destination Gateway Netmask Flags Metric Interface
> 192.168.0.1 * 255.255.255.255 H 0 ppp0
> default: * 0.0.0.0 G 0 ppp0
>
> routeend.
_______________________________________________
OMNeT++ Mailing List
options: http://lists.omnetpp.org/mailman/listinfo/omnetpp-l
archive: http://www.omnetpp.org/listarchive/index.php
void doPacking(cCommBuffer *buffer, IPAddress& addr) {
    buffer->pack(addr.getInt());     // serialize the address as a 32-bit integer
}

void doUnpacking(cCommBuffer *buffer, IPAddress& addr) {
    uint32 value;
    buffer->unpack(value);
    addr.set(value);                 // restore the address from the 32-bit integer
}
You can put this code into the .msg file (within double braces {{...}}); it
will then simply be copied into the _m.cc file.
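For concreteness, here is a hypothetical sketch of how such a block might sit in IPControlInfo.msg. Everything except the noncobject declaration mentioned earlier is an assumption for illustration, not the actual file contents:

```
// hypothetical excerpt -- not the actual IPControlInfo.msg
cplusplus {{
#include "IPAddress.h"

inline void doPacking(cCommBuffer *buffer, IPAddress& addr) {
    buffer->pack(addr.getInt());
}

inline void doUnpacking(cCommBuffer *buffer, IPAddress& addr) {
    uint32 value;
    buffer->unpack(value);
    addr.set(value);
}
}}

class noncobject IPAddress;
```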
The error message "no doPacking() function" comes from the generated _m.cc
file, which contains a generic template rule that fires if there is no
type-specific doPacking() function already (like those above).
Andras